Perfect. Well, welcome back, everybody, and everybody who's on Facebook. We're going into the second part of today's OpenShift Commons gathering, and we're going to kick it off with the state of the container ecosystem with two of my fellow Red Hatters. Many of you have heard of them, Dan Walsh and Mrunal Patel, and we're really pleased to have you all back. We're going to try and stay on time today, so I'm going to let them get started. Thank you all.

Okay. Can everybody hear me? Okay, my name is Dan Walsh. I run the container team at Red Hat. I now work in the OpenShift division; I worked in the RHEL division up to about a month ago.

One of the things at the low level of containers that I've been fairly depressed with is how little advancement we've made over the last few years, mainly because people think there's only one way to run containers. There are actually a few t-shirts running around here that say "Containers are Linux" and "Linux is containers." Containers are just a Linux concept; you don't have to go through just one way of doing them. So what we started working on about a year and a half ago was trying to break apart what it means to run a container. What do you need to do when you want to run a container on your system?

The first thing you have to have is a definition of what a container is — what the content of a container is. This is actually the biggest contribution Docker made to the ecosystem: they got everybody to standardize on this one way of bundling up an image, a group of software you're going to install in your environment. And luckily, over the last couple of years there's been a standardization effort on that bundle, to make sure everybody agreed to continue to use it. One of the things I've always feared since I started working with containers is that we'd have a bifurcation. And it actually started to happen about three years ago: CoreOS decided they wanted to standardize on what they called the appc spec, an image bundle that was different from what Docker was doing. So we began to see the RPM-versus-Debian thing happening all over again, where people would have to package software in different ways. Luckily, all the major players in the container world got together and worked on a standard, and that actually went 1.0: the OCI image format spec.

So when I want to run a container, I first have to be able to identify the container, and the container sits in a container registry. The funny thing is, in the competitive business of containers, the container registry is really where everybody competes, right? Each one of the big cloud vendors does a container registry; Red Hat has the OpenShift registry; there's obviously Docker Hub; there's Quay from CoreOS. So there are lots and lots of competitors in container registries. But basically they all store the same thing: these image bundles. So the next thing I need to do to run the container is pull the container image from the registry to my host, okay? Can everybody tell me how they can do that? And everybody in this room is going to say Docker pull.
There's only one way to do that: docker pull. In the whole world, there's only one way to do it. That sucks, right? What is a container registry? It's a web front end, a web service, right? I should be able to do it with curl. I should be able to use any tool to pull these image bundles down. So we started working a few years ago on a tool called Skopeo, which we'll talk about at the end. Skopeo ended up evolving into this thing called containers/image. We built a Go library called containers/image — if you go to GitHub, containers/image, you'll find it; lots and lots of people contribute to it. And now we have ways of moving images from container registries to other container registries, from a container registry into local storage, different things like that. We needed a standard way of implementing pulling — and eventually pushing — images, and that's what containers/image is.

The next thing you need to do to run a container is take that bundle you pulled down using containers/image and explode it onto disk. But you have to put it on a special type of filesystem, a copy-on-write filesystem. If you think about these image bundles, they're layers, right? A first layer, a second layer, a third layer. So these are layered filesystems that you have to set up, and on top of the layers you eventually get a writable layer. It's copy-on-write, which means it feels like I'm writing to the layer, but I'm actually writing to a different place. Copy-on-write filesystems are things like overlay and devicemapper, and Btrfs has a version. So there are lots of copy-on-write filesystems, but the only implementation of this container storage lived inside Docker — the only place to store it was Docker's storage. So we decided to create a library called containers/storage. We basically took the code that was in Docker — most of it originally written by Red Hat — pulled it out, and made it into a library, so people could start storing these layered filesystems on disk themselves.

Lastly, you need a standard mechanism for running. How do you define what it means to run a container? That really has to be standardized, just like the bundle. Luckily, there's the OCI runtime spec, also at 1.0. What that gives you is a JSON file that basically says what's going to happen in this container: what the environment variables are, what security constraints are on it, what the entry point is, what the current working directory is. All those things are written inside a JSON file, and you have an exploded filesystem next to it, which is the rootfs. runC is the default implementation of the OCI runtime spec. Intel's Clear Containers is another implementation — Clear Containers happens to use KVM for isolation, while runC uses namespaces and cgroups on the local system. Other people are building OCI runtimes as well. Docker since 1.11 has been execing runC underneath the covers, so all its containers are launched with runC.

So if I have these four components, nowhere in there do I need a big fat daemon. Okay? If I have four components, I should be able to do these things from the command line, right?
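As a rough sketch — not the exact tooling from the talk — here's what those steps can look like with no daemon involved. This assumes skopeo and runc are installed, plus umoci, which isn't one of the components named above, just one convenient tool for exploding an OCI layout into a runtime bundle:

    # identify the image and pull it from a registry into an OCI layout on disk
    skopeo copy docker://docker.io/library/fedora:latest oci:/tmp/fedora:latest

    # explode the layers into a rootfs plus a config.json -- an OCI runtime bundle
    umoci unpack --image /tmp/fedora:latest /tmp/fedora-bundle

    # hand the bundle to an OCI runtime (runc here) and run the container
    runc run --bundle /tmp/fedora-bundle demo-container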
I should be able to do each one of these steps from the command line without having a big daemon in the way, right? One thing that drives me crazy in the container world is that people keep putting up daemons. You need a daemon for everything; everything's a client-server operation. In a lot of ways, I would just like it to be a lot simpler. These are just processes on a Linux system. I should be able to just exec them.

So, you're at the OpenShift conference, Kubernetes, KubeCon — this week we wanted to talk about Kubernetes. What does Kubernetes need to do to run a container? Well, first of all, if you look back, Kubernetes was originally developed on top of Docker: they built the entire Docker API into Kubernetes to talk to the containers. And here again, CoreOS actually caused some problems — good problems. CoreOS came along and said, we're going to write a huge amount of patches to Kubernetes to make Kubernetes work with rkt. And the Kubernetes people looked at this huge pile of patches — basically, if running with Docker, do it this way; if running with rkt, do it this way — and said, we can't support that kind of code. So what the Kubernetes folks said at that point is: we're going to define our own interface and allow anybody to build a container runtime for that interface. That's called the CRI, the Container Runtime Interface. They went back to the rkt people and said, you take rkt, build an rkt daemon that implements the CRI — implements our protocol — and we will gladly talk to you. And for Docker, they created what's called the dockershim, which basically puts all the Docker calls behind the CRI. So now Kubernetes talks this one protocol, the CRI, and it can support multiple different container runtimes.

So a year and a half ago, back in September, Mrunal up here — one of my best engineers — and I said, let's do a skunkworks project. Let's see if we could build, using those four components we built, a little tiny daemon — not a big fat daemon, hopefully a thin daemon — that would implement those four things: talk to a registry where an image is stored, pull the image, store it on disk, and then create a runC configuration. Very similar to what Docker did. Originally we called it something different; then the press got a little ahead of it, and all of a sudden it became "Red Hat is forking Docker." That's not what we did. What we wanted to do was implement a runtime for Kubernetes. So we ended up calling it CRI-O. I'll let Mrunal take it from here.

So what is CRI-O? From the name, you can see it's an OCI-based implementation of the Kubernetes CRI. We took all the components that Dan talked about and created CRI-O. What is the scope of CRI-O? Exactly what the Kubernetes CRI needs. We don't add any more code than what the CRI needs — nothing more, nothing less. We just implement the CRI; with each version of Kube, whenever there are changes to the CRI, we pick up those changes and implement them in CRI-O. The only supported user is Kubernetes. We don't aim to support any other daemons or orchestration tools. And we try to use standard components wherever possible to implement CRI-O.
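To make that concrete, pointing a kubelet at CRI-O instead of the built-in dockershim looked roughly like this — a sketch only; the exact flag names and socket path varied across Kubernetes releases of that era:

    # tell the kubelet to use a remote CRI runtime over its gRPC socket
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
            --image-service-endpoint=unix:///var/run/crio/crio.sock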
So in addition to the components that we already went over, these are some other components that we use for CRI-O. The first is the OCI runtime tools. Dan talked about runC and how it needs a config.json; the OCI runtime tools project has a library for generating those configurations. It's a project under Open Containers, so anyone who wants to use that library is free to do so, and it keeps in sync with runC. We use it to generate the config.json for running the containers in CRI-O.

Then for networking, we ended up using CNI. It has kind of become the default networking solution everywhere — all the companies that provide container-based networking solutions have a CNI plugin. We have tested it with a bunch of popular plugins like Flannel, Weave, OpenShift SDN, and Calico, and all of them just work.

And finally, last but not least, is conmon. Conmon is a monitoring process: it monitors each container. It's a small, tiny binary that we wrote in C to be efficient, so it doesn't use a lot of memory or CPU. It monitors the container for exit codes. It handles the logging — the CRI defines a format that each container runtime is expected to write logs in, and conmon is the component in CRI-O that does that for us. It also handles the TTY, so whenever you want interactive terminals, conmon is responsible for reading the master PTY from the container and copying data back and forth; it serves the attached clients. And finally, it detects and reports OOM, so when you check a pod's status and your container ran out of memory, you'll be able to see that.

So let's take a look at what a pod looks like with runC. You have a pod, which is the holder of the cgroups and the IPC and network namespaces — and optionally, in newer versions of Kubernetes, the PID namespace. Within the pod, you have the infra container, and then the actual application containers that are specified in the pod spec. For each one of these containers, we run conmon. Conmon is small and efficient — it's written in C and uses shared libraries, so it doesn't have a lot of memory overhead.

So this is the overall architecture when using the kubelet with CRI-O. On the left, you see the kubelet, and it's talking over gRPC APIs — the CRI is basically a gRPC API. It has two different services, the image service and the runtime service. The image service is responsible for listing the images available locally and also pulling images. Whenever you specify some image in a pod spec, the kubelet uses the image service to make sure the image is present locally; if it's not, it makes a call to the pull API, which CRI-O implements using the containers/image library we mentioned earlier. For the runtime service, we use the OCI runtime tools generate library for generating the config.json, we use CNI for hooking up networking for the container, and we use the containers/storage library for creating the root filesystem for the container. So you have CRI-O running on the right, launching pods, depending on what the kubelet requested over the CRI API.

CRI-O never breaks Kubernetes. What do we mean by that? Each pull request that goes into CRI-O passes all the Kubernetes tests. We don't merge any pull request if it breaks any Kubernetes test, ever. We run more than 300 tests for each pull request before it gets merged into CRI-O. So again, in the theme of the only supported user being Kubernetes: never break Kubernetes.
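Since CNI came up: CRI-O just picks up whatever network definitions the plugins provide under /etc/cni/net.d/. As an illustration only — this exact file isn't from the talk — a minimal bridge network using the standard bridge and host-local plugins looks something like this:

    # drop a minimal bridge network definition where CRI-O's CNI code will find it
    cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
    {
      "cniVersion": "0.3.0",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
    EOF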
So what versions of CRI-O are out there? The first version was 1.0, and the latest release on that branch is 1.0.7. We wanted a 1.0, and that corresponded to Kube 1.7. But in keeping with the theme that we want to be tied to Kubernetes, after that we jumped our versioning to match Kube's versioning. So each version of CRI-O after that is easy to match up with Kube: CRI-O 1.8.x supports Kube 1.8.x. The CRI-O 1.9 beta was released last week, and once Kube 1.9 is out, CRI-O 1.9 will be out as well.

And what about OpenShift? CRI-O shipped as a tech preview in OpenShift 3.7 on RHEL. OpenShift Container Platform will be moving to 3.9 afterwards, but Origin still has a 3.8 if you want to use that with CRI-O 1.8. We'll be targeting OpenShift Online, to deploy CRI-O there as a first step. OpenShift 3.9 will have full support for CRI-O as a runtime, and our goal is for 3.10 to fully support CRI-O as a default option in OpenShift. Can anybody tell me what version of CRI-O that will be with Kubernetes 1.10? Yeah. We're tracking Kubernetes — there's no confusion, we just pick the matching version, and it should work with Kube. We have maintainers and contributors from a bunch of companies — Red Hat, Intel, SUSE — and many other contributors. And I'm going to demo this tomorrow: come to my talk at KubeCon, and we'll go over CRI-O in action.

OK, so that's CRI-O. Pretty quick, huh? Pretty cool. Traditionally, Kubernetes has had a problem with Docker changing out from underneath it — every version of Docker has broken Kubernetes. So what we wanted to do when we built CRI-O was basically to say: whatever we do, we can't break Kubernetes. Kubernetes is the thing that's important here. With Docker 1.8, Docker 1.9, Docker 1.10, Kubernetes always trails behind them. As a matter of fact, Kubernetes right now only supports Docker 1.12; they're about to move up to Docker 1.13. And at that point Kubernetes is basically saying that might be the last version of Docker they'll support going forward. Even Docker is moving away from Docker — they're moving to containerd and using the CRI stuff for that.

So we talked about how OpenShift uses Kubernetes, but OpenShift does more than just use Kubernetes for running containers. It builds containers. So the second part of this is that OpenShift needs the ability to build a container image, and the ability to push container images around the environment. Can anybody in this room tell me a way of building a container image? Docker build. Can anybody tell me a second way? Source-to-Image — and what's that built on top of? Docker build. Anybody else tell me a different way? Ain't that depressing? Do you know what a Docker image, or an OCI image, is? It's a tarball and a JSON file. I could build a shell script that builds a tarball and a JSON file. That's all it is. But four years in, we have 200 people in front of me saying that the only way to build these things is using one tool with a big fat daemon. OK?

Last year, when we were talking about this stuff back at DevConf, a fellow worker of mine decided he would build a tool to demonstrate it. He started in the morning, and by the end of the day he had a working tool. He decided to call it something that makes fun of my accent. Because I told him, why don't you build me something that will build a container — why don't we just call it builder? With my Boston accent, that comes out "buildah." And he said, OK. And the logo is a Boston Terrier now. Now we have to change the icon.
I'm going to ruin that icon for everybody in this room: he looks like he has tighty-whities on his head. We've actually changed the icon, but I don't have the new one. OK.

So what is Buildah? Buildah is a command line tool — no big fat daemons — that builds containers. You can do a buildah from to pull down an image from a container registry. Guess what it uses under the covers? containers/image. What does it build on top of? containers/storage, unpacking the image on top of it. You just say buildah from fedora, and it gives you back a container ID. At that point, say I want to write to the image, so I do a buildah mount; when I do the mount, it returns the path where it mounted it. At which point you can use any shell script, any tool in the known universe, to copy files in — if it runs on Linux, you can use it to move content into the image. You can dnf install, you can use the cp command, you can do make install. You can do anything you want to put stuff inside this image. When you're done, you can do a buildah config to set those special environment variables, entry points, things like that that are associated with the image. And then you can do a buildah push to push it anywhere. So you can build with standard bash scripts, instead of a Dockerfile — which is a very bad version of bash — being the only way to ever build a container. You can decide when to commit, when to patch; if you want to use keys from your host, you can do that. So buildah: pretty cool little tool.

And we also support build-using-Dockerfile. You can actually take a Dockerfile, feed it in, and it will build it. But we don't like typing that in, so we call it buildah bud. Anheuser-Busch is not responsible for that name. So you can build using a Dockerfile, and basically it will build a container image, and you can push it and do anything you want, all on top of containers/storage.

As soon as you're done building the container image, CRI-O can use it. Because guess what? Unlike standard Docker, where you have a big fat daemon controlling the storage and the images, we can share the storage and the images between multiple different processes. So buildah can go and build a container image and have it instantaneously available to CRI-O. And we're going to talk about a couple of other tools that also use it. We can actually share storage between multiple things — a filesystem. Can you imagine, a filesystem shared between processes? What a novel concept we've come up with. OK?

We're working on OpenShift, so the next version of Source-to-Image, hopefully by this summer, will actually use buildah under the covers instead of Docker. Right now on OpenShift Online we're actually using Docker for builds, and we're using CRI-O underneath the covers; as we move forward, we want an alternative that uses buildah. The goal with buildah is actually to make it require fewer privileges. Right now it still requires the same amount of privileges to build a container, but hopefully in the future we'll be able to trim down the privileges required for buildah. Here's roughly what the flow we just walked through looks like.
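A minimal sketch of that buildah workflow, end to end — the package, file, and registry below are arbitrary examples, not anything from the talk:

    ctr=$(buildah from fedora)                  # pull a base image; prints a working-container ID
    mnt=$(buildah mount "$ctr")                 # mount its root filesystem; prints the mount path

    dnf install -y --installroot "$mnt" httpd   # use any host tool to put content in the image
    cp index.html "$mnt/var/www/html/"          # plain old cp works too
    buildah umount "$ctr"

    buildah config --env FOO=bar --entrypoint /usr/sbin/httpd "$ctr"   # set image metadata
    buildah commit "$ctr" my-httpd              # turn the working container into an image
    buildah push my-httpd docker://registry.example.com/my-httpd:latest

    buildah bud -t my-httpd .                   # or: build from an existing Dockerfile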
So what else does OpenShift need? Well, one problem with CRI-O is that it doesn't have a CLI for people to poke at it. Everybody that goes into a Kubernetes environment right now, if something goes wrong, they get onto the box and execute Docker commands: they do docker ps to see what's running in the pods, docker images to see what images are downloaded. So we needed tools to do similar things. So we're introducing something I'll refer to as kpod today, because that name has been rejected — but that's what we all call it, and legal has not come back with the name we can call it. It's part of the libpod effort, and the naming is working its way through legal.

Kpod is a tool for managing pods and containers, based on the Docker CLI. We know you all understand that the way you list containers is to type docker ps -a, and the way you list images is docker images — all this knowledge has been built up around the Docker CLI. So we decided to build our thing called kpod and use our own special CLI: we have kpod ps, we have kpod run, we have kpod exec, we have kpod images. Really, we're very creative in what we call these things. Basically, kpod is an entire Docker-CLI-type environment that does pretty much the same things. But guess what? No big fat daemons. When you execute a kpod run, the process that's running the container is a child of the client; it's not connecting to a big fat daemon somewhere that runs it in a different environment. So you can actually start to build smarter environments.

And guess what? Kpod shares containers/storage with CRI-O. So if you're running a CRI-O environment, you can run kpod ps and it will show you all the containers that are running in it — but from a separate process. You can launch kpod, you can launch buildah, and they can all share the same storage. What we want to do with kpod in the long run is actually give it full concepts of pods, so it advances past just the Docker CLI to the point where we can join containers to pods and start getting creative about what it means to be a pod. This is all part of what's called the libpod project: we want to build a library for managing pods that CRI-O and other tools can start taking advantage of.
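As an illustration, here are the kinds of invocations just described — these mirror the subcommands named above, though the exact flags in the tool may differ (and remember, the kpod name itself was still in legal limbo):

    kpod images                     # list images in containers/storage -- the same storage CRI-O uses
    kpod ps                         # list containers, including the ones CRI-O launched
    kpod run -it fedora bash        # the container runs as a child of this command, no daemon
    kpod exec <container-id> ls /   # run a command inside a running container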
So here's the grandfather of them all. I have to mention Skopeo, especially since we have Antonio, its creator, here. Skopeo is the last tool — I think I have seven minutes left, so I'm racing through these. Skopeo might be the most popular of our tools out there, but nobody talks about using it. A lot of people are, though. Skopeo is actually the original CLI that containers/image was based on. Skopeo comes from the Greek for remote viewing.

So, a little history on Skopeo. A few years ago, we wanted to be able to go out to a container registry and look at the JSON associated with an image. The only way to look at the JSON associated with an image in the Docker world is to pull the image to your host, and then you're allowed to look at the JSON. We had a problem with that, because some of our images, frankly, are huge. Pulling down half a gigabyte or a gigabyte to your system just to look at the JSON and say, well, that's really not what I needed, and now I'll remove it, seemed like a waste of bandwidth. Instead, we wanted to go out and get just the JSON and pull that down. So we actually built a patch for Docker: instead of docker inspect, it was docker inspect --remote. And Docker rejected it. They said, you should go off and implement that on your own — it's just simple web stuff, so implement it on your own; don't be adding new patches to the Docker CLI.

So Antonio here said, okay, I'll do that. The problem is, he didn't stop at that point. He said, well, if I'm going to pull down the JSON, I might as well pull down the image, too, inside of my tool. And then, well, if I pull the image, I might as well push the image. So he continued to develop this thing. He built the Skopeo tool, which does a really nice job of pulling and pushing images. With containers/image, it can actually pull an image from one registry and push it to another registry. There are now even Windows ports of this tool that people are using to move images between registries. (Sorry — that's my phone telling me I have a meeting.)

So Skopeo started out moving images between registries, but containers/image has actually gotten rather creative. containers/image supports containers/storage, so I can pull an image out of a registry and push it directly into CRI-O's storage, or buildah's, or kpod's. I can even push it directly into Docker: I can pull an image from Docker Hub and stick it into Docker's database. I can push to a directory. I can push OCI images and Docker v1 images, all with the Skopeo tool. So if you're looking for a tool to manage moving these images from registries to local storage — sort of pre-loading systems — that's what a lot of people are using Skopeo for. That's a quick view of what Skopeo can do; a few examples are sketched below.
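To make those concrete, a few illustrative Skopeo invocations — the image names are arbitrary examples:

    # look at an image's JSON metadata without pulling the whole image
    skopeo inspect docker://docker.io/library/fedora:latest

    # copy an image from one registry to another
    skopeo copy docker://docker.io/library/fedora:latest \
                docker://registry.example.com/fedora:latest

    # pre-load an image straight into local containers/storage,
    # where CRI-O, buildah, and kpod will all see it
    skopeo copy docker://docker.io/library/fedora:latest containers-storage:fedora

    # or just dump it into a directory
    skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora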
And I'm going to end now. We can do questions — I've got five minutes left, right? (Well done. This talk usually takes an hour. That's how much caffeine he's had.) Do we have any questions in the audience for Dan, Mrunal, or Mr. Skopeo? Everybody's quiet — they thoroughly understand everything. All right, yes. I can't hear you — who was that? Raise your hand. Just stand up and shout it. No, make Diane run. Diane's going to get awfully hot.

Are we going to be able to interface with CRI-O as unprivileged users? We're still not getting it — can you say it one more time? Are we going to be able to interface with CRI-O as unprivileged users?

So — anybody that talks to a container runtime as a regular user — you're talking about something like exposing the Docker socket to a user? Well, let me tell you about exposing the Docker socket to a user, okay? Just give them sudo with no password and turn off logging, okay? Because you have basically given them full root on the system. Anybody that can talk to a container registry — I mean, a container runtime — you're giving them full root on the system. So yeah, if you believe you should allow your users to have full root on your system, then give it to them. There's no additional security built into CRI-O over what's built into Docker, okay? Again, the only thing CRI-O does is implement what Kubernetes wants. Whether we'd allow you to use kpod in the future, say using user namespaces to do it, is something we might investigate. There is a tool called bubblewrap that actually implements some of that, and if you follow the Flatpak project, it allows you to do some of this. But right now, we're not doing anything special in CRI-O that's not going to require root, so I would prefer you to use sudo to set up those environments.

Someone else have a question? Yes? One in the back. Windows support? Do you not see the shirt? No. No. If Windows wants to come in — well, Windows engineers, or somebody, have come in and given us patches to make Skopeo work on top of Windows and on top of Macs. It's all open source. We've actually talked to Windows — Microsoft, I guess I should say — about potentially using some of this technology. But we're not doing the work, so I would love to have them come in and join us. And Microsoft supports a lot of Linux, so I'm sure they'll be running it in their Linux environments.

Anybody else? Kind of a detail thing: conmon — is that a bit like an init container for a pod? So conmon isn't like an init container. It's a small process that is the parent process of the container. And this is because of the way OCI has implemented the separation of create and run: we need something to actually monitor the process, and that's the role conmon plays. You can still have your own init inside a container, but to monitor the container itself from outside, you need conmon.

Yeah — when I run a runC container, runC actually starts PID 1 and then goes away. What happens is conmon launches runC and stays around running, basically listening to the stdin and stdout of what ends up being PID 1 of the container. And it sits out there; if anybody wants to connect or attach to it, it can give back control of the terminal. That's also how we fixed something Docker used to not be able to do: restarting the Docker daemon would take down all the containers. We can restart CRI-O, and because conmon is sitting out there basically holding the containers open, they'll continue to run. That's the way I like to think about conmon.

Anybody else? If not, you're right on time. I'm right on time. Thank you very much, folks. Thank you.