Alright, I think we can start. Let me welcome Antonio. He's a Docker maintainer in Fedora, and he will also talk about containers. So, welcome everyone to this talk about CRI-O, or "cry-o", or "see-ar-eye-o", whatever. Briefly about me: my name is Antonio, and I'm also known as runcom online, sometimes even offline; someone even called me skopeo once. I'm a software engineer at Red Hat on the Project Atomic team, specifically on container security. I'm a maintainer upstream on Docker, on CRI-O, containers/image, and skopeo, and I maintain Docker in Fedora. I don't have a microphone, so I'll try.

In this talk, I'll go through why we are working on CRI-O, its architecture, its core components, a short demo (which is recorded), how to use it, and some useful links for everyone who maybe wants to start contributing to it.

CRI-O started because Kubernetes has some issues when it comes to container runtimes. Specifically, the Docker and rkt runtime integrations are built into the Kubelet, so they are the only supported runtimes right now. Also, the Kubernetes pod concept is interpreted differently by each container runtime: for Docker, a pod is an infrastructure container, but for other runtimes, like Hyper, it could be a virtual machine. So the pod concept is really interpreted differently across runtimes. And since the interaction between the runtimes and the Kubelet is built into the Kubelet source code, it leads to a maintenance burden in Kubernetes when it comes to implementing new Kubernetes features, or even modifying them, as that requires modifying each runtime inside the Kubernetes source code, which is a really huge effort. Lastly, it's not easy at all to plug in new runtimes; right now you're pretty much stuck with Docker or rkt.
Another issue is that there are container runtimes like Docker which add new features with every new release of the Docker engine, for instance, and those often come with bugs, which leads to Kubernetes being not that stable. That is something Kubernetes doesn't like, I think. So eventually, the Kubernetes community came up with a proposal for a high-level interface which is able to talk to any runtime, as long as that runtime speaks this new interface. Right now, the Kubelet talks to Docker via the Docker API. So the Kubernetes community came up with the CRI, the Container Runtime Interface, which is the "CRI" part of CRI-O. This is an imperative, container-level interface, which means that all the runtime knows is just how to start sandboxes and containers; there is no pod spec to be interpreted by the runtime. It's a gRPC API, a client-server architecture. All the runtime knows, as I said, is just how to start a sandbox, which can be either a container or a virtual machine, and then the Kubelet will say: alright, start this container inside this sandbox to form a pod. The Container Runtime Interface comes with a runtime service and an image service: the runtime part is for starting pods and sandboxes, removing them, stuff like that, while the image service pretty much does pulling of images. It's an alpha feature since Kubernetes 1.5. Docker and rkt shims are being developed, so I guess it will be easy to swap from the built-in implementations to the CRI ones. And of course, since the interaction between the Kubelet and the runtime now goes through the CRI, it's really easy to add new runtimes in this scenario. CRI-O specifically implements the CRI, and as runtimes we're using OCI-conformant runtimes. So what is CRI-O? It's basically the integration path between OCI runtimes and the Kubelet.
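To make the shape of the interface concrete, here is a minimal, hypothetical Go sketch of the two CRI services described above. The real CRI is a gRPC protobuf API; the method names below mirror it, but the Go types and the in-memory "runtime" are invented purely for illustration.

```go
package main

import "fmt"

// RuntimeService and ImageService are simplified stand-ins for the two
// CRI services: the runtime side manages sandboxes and containers, the
// image side handles pulling.
type RuntimeService interface {
	RunPodSandbox(name string) (string, error)
	CreateContainer(sandboxID, image string) (string, error)
	StartContainer(containerID string) error
}

type ImageService interface {
	PullImage(ref string) error
}

// toyRuntime is an in-memory implementation of both services.
type toyRuntime struct {
	images     map[string]bool
	sandboxes  map[string]bool
	containers map[string]string // container ID -> sandbox ID
	nextID     int
}

var _ RuntimeService = (*toyRuntime)(nil)
var _ ImageService = (*toyRuntime)(nil)

func newToyRuntime() *toyRuntime {
	return &toyRuntime{
		images:     map[string]bool{},
		sandboxes:  map[string]bool{},
		containers: map[string]string{},
	}
}

func (t *toyRuntime) PullImage(ref string) error {
	t.images[ref] = true
	return nil
}

func (t *toyRuntime) RunPodSandbox(name string) (string, error) {
	t.nextID++
	id := fmt.Sprintf("sandbox-%d", t.nextID)
	t.sandboxes[id] = true
	return id, nil
}

func (t *toyRuntime) CreateContainer(sandboxID, image string) (string, error) {
	if !t.sandboxes[sandboxID] {
		return "", fmt.Errorf("no such sandbox: %s", sandboxID)
	}
	if !t.images[image] {
		return "", fmt.Errorf("image not pulled: %s", image)
	}
	t.nextID++
	id := fmt.Sprintf("ctr-%d", t.nextID)
	t.containers[id] = sandboxID
	return id, nil
}

func (t *toyRuntime) StartContainer(containerID string) error {
	if _, ok := t.containers[containerID]; !ok {
		return fmt.Errorf("no such container: %s", containerID)
	}
	return nil
}

func main() {
	rt := newToyRuntime() // satisfies both services
	_ = rt.PullImage("docker.io/library/nginx:latest")
	sb, _ := rt.RunPodSandbox("nginx-pod")
	ctr, _ := rt.CreateContainer(sb, "docker.io/library/nginx:latest")
	fmt.Println(sb, ctr, rt.StartContainer(ctr))
}
```

The call order in main mirrors what the Kubelet does over the wire: pull the image, run the sandbox, create the container inside it, then start it.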
For those who don't know, OCI is the Open Container Initiative, a Linux Foundation project which aims at standardizing the container world, let's say. There are, for now, two specifications: a runtime one, which we use in CRI-O to run containers, and an image one, which is being developed and which we're going to adopt once it stabilizes. To answer the question: I'd like to see CRI-O replacing Docker in Kubernetes, so we can run any runtime we want. CRI-O is specifically built for Kubernetes workloads. Since it's a pre-alpha Kubernetes Incubator project, we're just focusing on running CRI-O for Kubernetes workloads, so it will be tailored just for Kubernetes. And of course, since we have this abstraction, we can plug in any OCI runtime.

So let's clarify what's in scope and what's not in scope for this project. First, we don't want to break backward compatibility when it comes to container images, and we work hard to make sure that we can pull images from Docker registries, so we're not breaking that, but we're exploring OCI images and some means to pull those images as well. Speaking of container images, we support multiple means to download images, along with optional signature verification and trust management. Of course, we're going to implement the full container image life cycle (pulling images, storing them, creating a root file system out of them) and the container process life cycle as well: running containers, running pods and sandboxes, and so on. Right now it's a work in progress, but we'll see. Yes, soon. What's not in scope, luckily I'd say, is building containers, signing images, and pushing them; we're not doing the stuff which Docker usually does. So there's likely no way to break that much when it comes to adding new features. And we're also not going to provide a CLI utility for interacting with CRI-O.
The only means to speak to CRI-O is just the Kubelet, for now. (A question from the audience.) Yeah, that's a huge topic; that's basically it.

So before diving into the CRI-O architecture, there are those two pieces, the Kubelet and ocic, a small client, and these are the only ways to talk to CRI-O. As for the CRI-O architecture: we have ocid, the server, which listens on a Unix socket and takes requests from the Kubelet and ocic for runtime and image management. We have conmon, a small application we'll see later in the slides, which ocid uses to start containers. For the image and storage layers, we have two libraries, containers/image and containers/storage. We're using CNI, the Container Network Interface, to provide networking to pods; it's also used in Kubernetes, if I'm not wrong. And finally, we have the OCI runtime, which can be any runtime as long as it conforms to the OCI runtime specification.

One by one: ocid is the main daemon, which acts as a gRPC server for the CRI, and it provides both the runtime service and the image service, which means, as I said, it can serve requests for image management and container management. It's worth saying that a container's life cycle isn't tied to ocid, because ocid won't be the parent process of the container itself. That means we can restart ocid and containers will keep running; eventually you can bring ocid up again, the state will be restored, and you'll have the state there. Next, we have the image and storage layer. As said, CRI-O leverages two libraries, containers/image and containers/storage. containers/image, as was pointed out this morning, is a Go library for copying and converting images. It also supports signature verification and trust management. We use containers/image to pull images, of course.
Then we have containers/storage, which is our way to store those images. It does that by storing the images on a copy-on-write file system, for now. containers/storage also takes care of creating the root file system for a container when we run it, and it stores the images on disk.

Next we have conmon, a small standalone C application which sits in between ocid and the runtime. It acts as a shim: each container has its own lightweight conmon process, which acts as the direct parent of the container. That's how we achieve restarting ocid without bringing down containers. conmon is also responsible for keeping the container's I/O, logs, and master pty open, for recording the container's exit code, and for reaping the container's processes when the container exits. Finally, conmon also proxies the container I/O to remote clients, like the Kubelet, over HTTP; that's for features like exec and attach. It's worth noticing again that the container's life cycle isn't tied to ocid: the parent of the OCI container is conmon, not ocid. As for what happens if you bring down conmon itself, I'm not sure, I've never tried that, but I believe the container probably goes down as well.

Networking is provided by CNI, which is also used in Kubernetes, as I said. Without diving into how it works: it's plugin-driven, and for the basic case we use it to create veth pairs and assign them to the container and to the host. There are many plugins we can use, like bridge, loopback, ipvlan, macvlan. And then there's what I'd say is the core of the project, the OCI runtime, and it's really any runtime, as long as it adheres to the runtime specification.
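The parent-process role of conmon can be illustrated with a small sketch (in Go here rather than C, and vastly simplified; the function name and exit-file convention are invented): start the child, wait for it, reap it, and persist its exit code so the daemon can read it later, even if the daemon was restarted in between.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// monitor acts as the direct parent of one container process: it starts
// the command, waits for it (which also reaps the child), records the
// exit code to a file for the managing daemon, and returns the code.
func monitor(cmdName string, args []string, exitFile string) (int, error) {
	cmd := exec.Command(cmdName, args...)
	if err := cmd.Start(); err != nil {
		return -1, err
	}
	err := cmd.Wait() // reaps the child and collects its status
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		return -1, err
	}
	// Persist the exit code so it survives a daemon restart.
	if werr := os.WriteFile(exitFile, []byte(fmt.Sprintf("%d\n", code)), 0o644); werr != nil {
		return code, werr
	}
	return code, nil
}

func main() {
	code, err := monitor("true", nil, "/tmp/demo-container.exit")
	fmt.Println(code, err)
}
```

The real conmon does much more (stdio, logging, terminal handling), but the exit-file mechanism is the part that decouples the container's life cycle from the daemon's.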
Our default is runC, which is the reference implementation of the OCI runtime spec, and it has some cool things, like seccomp and SELinux support and stuff like that. We're not limited to it, as I said, so we can run any OCI runtime, like Clear Containers, runV, and many more. It's fully swappable, as long as the runtime adheres to the OCI runtime specification.

(A question from the audience.) The question was whether the Kubelet knows which runtime we're using, right? No, there's no way to know that other than the Kubelet asking for the runtime version. That could be expanded, I believe, so we'll work on that, because you just start the Kubelet telling it to listen to this CRI-O socket, which will provide everything through the Container Runtime Interface.

Now let's see how those things work together, with a demo of a Kubernetes cluster running an nginx pod. I recorded the demo. Yes, this is tiny, but don't worry, we're just starting ocid with a runtime. Right here, I'm starting the Kubelet, telling it to use the CRI via the experimental CRI environment variable. This will bring up the Kubelet and tell it to connect to CRI-O, which we started before. So the Kubelet is up, and now I'm increasing the font size. The node is up and it's running Kubernetes 1.6-alpha. There are no pods, there are no runC containers, Docker isn't even running. So let's try to run an nginx pod, which will take a huge amount of time, but we'll skip ahead. What CRI-O is doing right now is creating a sandbox for this pod ("pod sandbox request received" and so on): it's creating the pod, pulling the nginx image, preparing the root file system; it will create the container via the runtime and start it, and profit, maybe. So I'll pause there. You can see that at some point we were pulling the image; we pulled it, we created the container, and we started it.
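The swappability just mentioned boils down to configuration: the daemon only needs the path to some OCI-conformant runtime binary. A minimal sketch, with invented field names and hypothetical binary paths:

```go
package main

import "fmt"

// RuntimeConfig maps runtime names to OCI runtime binaries. Which
// binary gets invoked is pure configuration; the daemon drives every
// runtime through the same OCI create/start interface.
type RuntimeConfig struct {
	DefaultRuntime string            // used when no runtime is named
	Runtimes       map[string]string // runtime name -> binary path
}

// BinaryFor resolves a runtime name (or the default) to a binary path.
func (c RuntimeConfig) BinaryFor(name string) (string, error) {
	if name == "" {
		name = c.DefaultRuntime
	}
	path, ok := c.Runtimes[name]
	if !ok {
		return "", fmt.Errorf("unknown runtime %q", name)
	}
	return path, nil
}

func main() {
	cfg := RuntimeConfig{
		DefaultRuntime: "runc",
		Runtimes: map[string]string{
			"runc":             "/usr/bin/runc",
			"clear-containers": "/usr/bin/cc-runtime", // hypothetical path
		},
	}
	p, _ := cfg.BinaryFor("")
	fmt.Println(p) // /usr/bin/runc
}
```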
We can see that the pod is running, so we'll find its IP address and test it in a browser, and you can see it's working: the container runtime exposes the pod on the network via CNI, so we can connect to it. The funny thing is, we still have no runC containers, and we don't have Docker at all. For this demo, we ran CRI-O with Clear Containers, which is another OCI runtime, and since Clear Containers is OCI-conformant, we can run it underneath CRI-O and the Kubelet. And the thing is, Clear Containers is not running containers as we know them, with cgroups and namespaces: what Clear Containers does is create a virtual machine underneath, start those processes inside it, and still conform to the runtime specification, I mean, the standard way to run the container. (A question from the audience.) I don't know; I don't think you can, because you need some patches. Good question. So we can see nginx is running, and I stop the recording.

So, back to the slides. The demo is online on YouTube, so feel free to share it. If you want to try CRI-O, there are quite a few ways to do that. You can play around with ocic; as I said, that's not supported, but you can use it. You can bring up a local Kubernetes cluster, or a multi-node Kubernetes cluster, and try it; that would be helpful for us. There are packages: I didn't say that this morning, ruining my talk, but Fedora 26 RPMs and openSUSE RPMs are coming soon, hopefully; we have an openSUSE maintainer who is probably already creating those. Kelsey Hightower did a great CRI-O tutorial, which runs CRI-O on Google Cloud; we ported that tutorial into our README, so you can find it there for more information. If you want to get in touch with us, there is our GitHub website and the repository. We have a community meeting on Thursdays, and we're sending out information about it every week. We have
maintainers from China as well, so we're trying to come up with a reasonable meeting time for everyone. We're excited: we have many contributors from many of the top Linux, open source, and container companies, like SUSE, Intel, Huawei, Hyper, and IBM. It's worth saying that CRI-O is really under active development. We got a pod running with the Kubelet, but we're missing some tiny features, which we're going to tackle in the next months. We have a current roadmap: right now, we are targeting running the full Kubernetes end-to-end tests with pretty much full coverage, so we're doing that right now. Future plans include stuff like kpod, which is a way to run Kubernetes pods without going through the CRI at all. And we're missing some other stuff, like logging, but we already have pull requests for that. I guess that's it; thanks for coming.

(A question from the audience.) Right, so the question was how CRI-O pulled the image, and where from. CRI-O pulled the image with containers/image, one of the libraries we use inside CRI-O to provide that functionality, specifically pulling the image. There was no host name in the image reference, so by default the pull went to docker.io, Docker Hub, and pulled the official nginx image. (Another question.) I guess the question applies to OpenShift builds as well. The question was whether you can run pods using runC alongside pods using Clear Containers, for instance. Sure, we can do that; we're waiting for you to open a pull request. Yes, you can do that. (Another question.) The question is when Kubernetes is going to adopt CRI-O. Well, I don't know; at this point, it's still an alpha Kubernetes Incubator project, so I don't really have a due date for it. I'll let you know. (Another question.) The question is how the Kubelet knows if the node is capable of running containers, or what kind of operations can be done. Since there is a Container Runtime Interface sitting in between, there is a contract between the two, and so as long as CRI-O implements all of the CRI methods, you're
good: the Kubelet just asks CRI-O to create a sandbox, and that goes back to the Kubelet; then CRI-O starts the container in the sandbox, and that goes back too, and all the containers, everything is okay. That's a good question. Yeah, something like that. Any other question? Any other? Alright. I'll just put the presentation online. I'm around till Sunday, happy to talk to you all.
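One of the answers above mentioned that the unqualified "nginx" reference defaulted to Docker Hub. That short-name resolution can be approximated with a simplified, hypothetical sketch; the real logic lives in containers/image and also handles digests, ports, and configured registries.

```go
package main

import (
	"fmt"
	"strings"
)

// normalize expands a short image reference into a fully qualified one:
// add the default registry if none is given, the library/ namespace for
// Docker Hub official images, and the :latest tag if no tag is present.
func normalize(ref string) string {
	parts := strings.SplitN(ref, "/", 2)
	// The first segment is a registry host only if it looks like one
	// (contains a dot or a port, or is "localhost").
	hasRegistry := len(parts) == 2 &&
		(strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost")
	registry, name := "docker.io", ref
	if hasRegistry {
		registry, name = parts[0], parts[1]
	}
	// Docker Hub official images live under the library/ namespace.
	if registry == "docker.io" && !strings.Contains(name, "/") {
		name = "library/" + name
	}
	// Append the default tag if the last path segment carries no tag.
	if !strings.Contains(name[strings.LastIndex(name, "/")+1:], ":") {
		name += ":latest"
	}
	return registry + "/" + name
}

func main() {
	fmt.Println(normalize("nginx"))              // docker.io/library/nginx:latest
	fmt.Println(normalize("quay.io/foo/bar:v1")) // quay.io/foo/bar:v1
}
```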