Okay, so my name is Luca. I work at CoreOS in Berlin, where I lead the containers team. Today we're going to talk about the topic here: containers. Specifically: you have a Kubernetes cluster, you have your containers, and you want to run them. Fine. We're going to get into the details, the internal stuff, of how Kubernetes actually runs your containers.

So first, a step back. In Kubernetes you're not actually running raw containers; you're running pods. Pods are the basic unit of execution, and they tie together multiple containers into a single service. Kubernetes then takes care of scheduling them, plus some other tasks around that, like recovering from failures and scaling your pods across your cluster.

Cool. How does this work? There are basically three main entities here. First you have a cluster, of course, and in this cluster there is a scheduler, which is the source of truth and takes care of scheduling your pods around. Then you have workers, and on each of these workers there is one entity called the kubelet, which is a long-running daemon, a service, that sits there and receives orders from the scheduler. And then there is another entity, the container runtime, which is kind of an abstract concept: it is the one that is really running things. So whenever you say "Kubernetes, I want to run this pod", the scheduler takes some decision and delegates to the kubelet. The kubelet is running on some specific node, and it goes and says "look, container runtime" (which is typically Docker) "I want to run these containers all together", and then Docker goes and actually runs your containers. So it's pretty easy.

To visualize it a bit better, it works this way: there is always some distributed key-value store, typically etcd, and that's where the scheduling state lives. Then there are components that take care of forwarding and propagating this information to other components, in this case the scheduler and the API server. And then you have multiple workers with their kubelets, which receive these orders and just execute the pods and their containers. (We didn't start the timer, but it's okay.)

The same thing seen as a hierarchy: you have a kubelet on some node, let's say node one, then you have the container runtime, for example Docker, and then your pod gets executed with all of its resources inside. So in this case, let's say you have your service in there, which contains two applications: application one, the main application, and some sidecar. Fine, I think it's pretty clear up to here.
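(To make the "pod as a group of containers" idea concrete, here is a minimal sketch using today's Go client types. The service shape and the image names are made up for illustration; they are not from the talk.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod is the unit the scheduler places: both containers below always
	// land on the same node together and share the pod's resources, such as
	// its network namespace.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "my-service"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				// The main application (hypothetical image name).
				{Name: "app", Image: "registry.example.com/app:1.0"},
				// A sidecar, e.g. a log shipper, that lives and dies
				// with the pod (also a hypothetical image).
				{Name: "sidecar", Image: "registry.example.com/log-shipper:1.0"},
			},
		},
	}
	fmt.Printf("pod %q with %d containers\n", pod.Name, len(pod.Spec.Containers))
}
```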
From here on we go more into the technical part. In reality, when you are running your Kubernetes node, you have all these entities in there. Typically you run it with Docker, which means you have a kubelet that receives some order and then forwards this order to the Docker daemon, the Docker engine. That in turn has some internal components, for example containerd, which takes care of another part of the execution and then internally delegates to something else, for example runc, to actually run the containers. And then there is some magic going on around it to group these containers together. This is, let's say, the historical Kubernetes setup.

Then at CoreOS we had some troubles, let's say, and we also had some opinions. Those opinions were: there are too many components in here, some of them until recently did not support checkpointing and restoring, and most of them are long-running services whose lifetime is tied to the lifetime of your containers, of your pods.

So we came up with an alternative: okay, let's write a different container runtime. We keep everything else, the scheduler and the kubelet, as it is, and we just swap out the bottom layer. And that's where rkt comes in: we implemented rkt and then we integrated it with Kubernetes. All we did, basically, is run everything with a single runtime, a single process, instead of a long-running daemon. And we also found another interesting point, which is: what do you already have on your system? systemd. Maybe you don't like it, maybe you like it; it is there, and it has interesting features. So let's use it instead of re-implementing daemons.
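(A rough sketch of that "runtime as a one-shot process instead of a daemon" idea, under my own assumptions about the invocation; the rkt command line below is simplified for illustration.)

```go
package main

import (
	"log"
	"os/exec"
)

// Instead of asking a long-running runtime daemon over a socket, the
// supervisor simply execs the runtime binary once per pod. The pod's
// lifetime is then an ordinary process lifetime, which an init system
// such as systemd can supervise and restart directly.
func main() {
	cmd := exec.Command("rkt", "run", "docker://nginx")
	if err := cmd.Run(); err != nil {
		log.Fatalf("pod exited with error: %v", err)
	}
}
```

The design point is that crash recovery and restarts can be left to systemd's existing restart policies instead of being re-implemented inside a bespoke container daemon.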
So let's use it instead of re-implementing and demons So this is the usual configuration of a Kubernetes cluster running with rocket and then at that point we had some problem like okay But there are some tools that really wants to work with some kind of demon and we don't have any more a demon to provide this information so we also have some kind of Lateral components we just provide this kind of read-only information if you want to query into that Which is fine and then at that point we had like okay already to implementation It's getting the code is getting a bit dirty and messy to follow because there are different logic and different code path fine Let's sit down and have a discussion and this discussion resulting in CRI which is the new container runtime interface Which is being developed right now in Kubernetes the idea is the cubelet really wants some kind of fine-grain controlled on What is going to be executed on every single node and we want to move away more or less to the previous Paradigm where there was just this big sync pod logic routine into something Which is more granular like cubelet tell me which image you want to fetch from where and which container you want to schedule to run in which in which pod which Which resources you want in it and that's it and then we already have like some kind of protocol a standard Standard specification in order to that this component talks together, which is gRPC, which is an RPC mechanism based on protobuf And this is how CRI came to life It is currently experimental, so don't run it in production right now It is targeted for some kind of alpha beta feature in 1.6 1.6 and it will definitely happen probably this year Definitely probably Again trying to visualize it a bit better the idea now is we don't have any more or the services talking together We don't have different code path. There is a single way of talking From the cubelet to the runtime service and actually there are another part which is splitting up the image fetching from the Container running part. All of this is going through gRPC You just have to implement endpoints and being able to speak this gRPC protocol And then something in this case the runtime service will take care of scheduling your pods. This comes with some Other benefits, which is if you really don't like Docker or if you really don't like rocket That's fine. Just build your own implement your own. There are already several implementation for that and they cover different specific Topic, let's say So for example with rocket that was our design that was our approach But then you have like hyper hyper which is taking care of I want to really schedule pods Which are virtual machine and that's it How do you how do you find more about what I just told you? Well, you go online and you join the Kubernetes community that we are part of or you grab me outside So just go on github check the Kubernetes code This is done as part of sig node, which is the special interest group inside the Kubernetes community taking care of all these low-level details We have mailing lists. We have slack channel We have video calls and that's it and chorus is also hiring in Berlin, New York, San Francisco. So grab us