Let me introduce myself. I am Abubakar Siddiq Ango, developer evangelism program manager at GitLab. I'm also a Google Developers Expert for Cloud, focused on scalable infrastructure, and a CNCF ambassador. Lately my interest has been in supply chain security, and I've been sharing quite a lot of content around that too. I'm based in The Hague, Netherlands, just a two-to-three-hour flight from Dublin, so hopefully I get to see you there next time. You can always look me up on my website, abuangu.me.

Now, today we'll be talking about containers and container runtimes. What exactly are containers? Traditional deployment of applications used to be: you've written your application, you set up the hardware or server you want to use, install your operating system, and install your application. The application basically takes up the whole machine, that is, if you're using a bare-metal server.

Things then evolved to virtualized deployment: on your bare-metal server you install your operating system, then install a hypervisor. It can be VMware, VirtualBox, KVM, whichever one. With the hypervisor, you can then create individual virtual machines. These virtual machines are completely isolated from each other, each having its own operating system and the binaries needed for your application, including the application and its runtime. This is how most cloud environments work: you can go on GCP or AWS or any of them and create VMs, and those VMs run on top of a hypervisor that can provision them almost on request.

But things have evolved further. We all know the phrase: "It's a great application, but it only works on my machine." Now, how do we ensure that what works on your machine also works in the live environment? Basically, we should just ship your machine.
And that is the concept behind containers. Not that we are going to take your machine and put it online, but that the binaries and the whole setup that makes the application run on your system are replicated online, using what we call containers and a container runtime.

The way it works is: you have your hardware, you have your operating system installed, and on top of that layer, instead of a hypervisor, you have a container runtime. The job of the container runtime is to take the specification set by your application definition and talk to the operating system to provide the resources and components your application needs to run, while your application comes with the binaries it needs and starts running within the environment the container runtime creates for it.

So your container is not running fully isolated. It's running alongside other containers, sharing the same resources provided by the host operating system. The host operating system can even have access to some of the processes or files generated by the containers, and containers can communicate with one another. Basically, a container is a bundle of your application and all the binaries it needs to run, while the container runtime ensures that the operating system provides everything the application needs based on the specification provided.

Now, let's go through the past. Within Linux, there have been several ways the community has tried to achieve what containers do. Right from the early days of Linux, there have been ways to say: we need to restrict this process or this application to run within a sandbox, within a confined environment.
There have been different ways to jail processes to certain parts of the system, so that there won't be conflicts and one process can't tamper with a file that another process needs. Concepts like chroot and jails have been around in the ecosystem since the early 2000s and before. This evolved into cgroups, introduced by Google, which answer: on this server, how do we restrict the amount of RAM and CPU a process can use? How do we make sure a running process does not consume beyond what it needs?

More things were introduced, including namespaces. A namespace wraps a group of resources within the Linux operating system: you can create a set of resources that can be accessed by a process, and they can easily be identified, say, as host resources, networking resources, or whatever other resources it needs. And with cgroups you can then set limits: for the resources you are consuming, this is the ceiling you can reach.

Then came further advances that applied all these building blocks, cgroups and namespaces, to actually create containers. That's where things like LXC came up, and that is the stage from which Docker became a thing.

Now, let's dig down a bit into the concept of chroot. In the image here, we have the root directory of our system, which is /. From there, you have the normal Linux directories like bin, home, usr. Now let's assume there's a user called me, and a folder has been created called test. We want to jail a process running on this Linux machine to this test folder.
We ensure that any time the process tries to access the root, instead of taking it to the main root directory, the test directory becomes its root, and all the binaries and files it needs are referenced from there. So if it asks for /bin, instead of going to the main /bin, it stays in its own /bin, which is actually /home/me/test/bin. This process only gets access to the part of the file system that has been restricted to it.

Now, let's look at it a different way. We have our root folder, the / in yellow, with the regular system files. From there we go to home, then to joe. But within joe, we created another root directory, with bin and all the regular operating system files. Now if a process has been jailed to the root folder within joe, any time it tries to access a system resource, it only has access to the one within joe's home directory. It cannot access the ones outside. This way, a process is unable to tamper with the main operating system or access the files or resources of another process.

Then another concept that is crucial to our understanding of containers is namespaces. There are different resources on your operating system: PIDs (process IDs), network resources, users and groups, mount points, System V IPC, and a couple of other things the system needs, like information about the host, the system's hostname and domain name, and so on. Namespaces are different groupings of these resources that can exist within your system. For example, there is PID 1, the init process that starts every other process in the system. But the system already has one process with PID 1.
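Before going further with namespaces, the chroot path mapping described above can be sketched in a few lines of pure Python. This is a hedged illustration of the path arithmetic only, not a real chroot(2) call, which requires root privileges; the jail root /home/me/test is the hypothetical folder from the example.

```python
import os.path

def jailed_path(jail_root: str, requested: str) -> str:
    """Resolve a path the way a chroot jail would: the jail root
    becomes '/' for the process, so every absolute path the process
    asks for is re-rooted under the jail directory."""
    # Normalize first, then strip the leading '/' so the join stays in the jail.
    inside = os.path.normpath(requested).lstrip("/")
    return os.path.join(jail_root, inside)

# The jailed process asks for /bin, but really gets /home/me/test/bin.
print(jailed_path("/home/me/test", "/bin"))            # /home/me/test/bin
# Even a '..' traversal cannot climb out, because normpath collapses
# the leading '/..' to '/' before the path is re-rooted.
print(jailed_path("/home/me/test", "/../etc/passwd"))  # /home/me/test/etc/passwd
```

The same idea, enforced by the kernel instead of by path rewriting, is what keeps a jailed process inside its own file system view.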
Let's say that within your container, your jailed environment, you want to create a new process that will have PID 1. There would be a conflict with the main system's resources. But what if a namespace has been created for your application, and everything around process IDs, starting from 1 and counting up, is limited to that group? The same goes for network resources, the users and groups that can be accessed by a process, or mount points. With namespaces, you are able to create a set of resources, a group, that can be referenced and accessed by specific processes within your system.

Now, the other piece is cgroups. What cgroups bring is setting limits and restrictions. The image you can see here is thanks to Julia Evans; she's a very good artist who describes a lot of Linux and technology concepts in drawings you can easily understand, do check out her work. Here's an example. Let's say a process asks for 10 gigabytes of memory, and the Linux system tells it: I can only give you one gig. It gets one gig. Then you can have a group of processes, which we call a cgroup, and for that cgroup you can set limits and say: all three of you together only get 500 MB of memory. If for any reason one of the processes then uses a lot of memory, say one gig, it can be killed: OOM, out of memory. And if it's using too much CPU, it gets throttled; its requests are no longer prioritized, or it's held to a particular quota.
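The limit behavior just described can be modeled as a toy sketch. To be clear, this is not the real cgroup interface, which lives under /sys/fs/cgroup and is enforced by the kernel; it is just the bookkeeping idea: a group-wide memory ceiling, with a member OOM-killed when the group would go over it.

```python
class CgroupSketch:
    """Toy model of a cgroup memory controller: a shared ceiling for a
    group of processes, with an OOM kill when the ceiling is exceeded."""

    def __init__(self, limit_mb: int):
        self.limit_mb = limit_mb
        self.usage = {}  # pid -> MB currently charged to this group

    def request_memory(self, pid: int, mb: int) -> str:
        charged = sum(self.usage.values())
        if charged + mb > self.limit_mb:
            # Over the group limit: the offending process is OOM-killed
            # and whatever it held is released back to the group.
            self.usage.pop(pid, None)
            return f"pid {pid}: OOM-killed (group limit {self.limit_mb} MB)"
        self.usage[pid] = self.usage.get(pid, 0) + mb
        return f"pid {pid}: granted {mb} MB"

# Three processes share a 500 MB ceiling, like in the talk's example.
group = CgroupSketch(limit_mb=500)
print(group.request_memory(1, 200))   # pid 1: granted 200 MB
print(group.request_memory(2, 200))   # pid 2: granted 200 MB
print(group.request_memory(3, 1024))  # would exceed 500 MB, so OOM-killed
```

The real controller also handles CPU quotas and throttling; the memory case is just the easiest one to picture.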
So this way, different components within the system that are grouped together can be limited in how much resource they consume. This concept is used for containers to say: we have these processes for this particular container; let's give it this much memory, this much CPU, and it should not go beyond the resources allocated to it. That's what cgroups bring to container technology.

Now, let's go deeper into containers themselves. From the image here, we are all familiar with Docker. Docker made containers cool. Docker made containers approachable and usable by everyone, by providing binaries and other components that users can use to create containers: you just docker image pull, docker run, and so on, and immediately you have a container. Docker does this by having what we call the Docker daemon, a Docker engine installed on your system that receives requests from you, communicates them to the runtime that runs the container, and tells the operating system: create this container from this image, do this, do that.

Now, you will have noticed the recent issue in the Kubernetes community where Docker was deprecated from Kubernetes, and it became an uproar on Twitter. When Kubernetes was originally developed, Docker support was effectively hard-coded into it. But it got to a point where that was adding friction, and they had to introduce a layer within Kubernetes that translates requests from Kubernetes to Docker. And since Docker is not the only container engine available, a lot of organizations wanted to use the other container runtimes out there. It became difficult for Kubernetes not to say: okay, let's allow some of these other container technologies.
So that extra layer, called the dockershim, had to be introduced to translate all commands and requests from Kubernetes to Docker. This is one of the reasons new container technologies, many you may not have heard of before, came up.

If we look at what basically happens when you execute a Docker command, in the image here you'll see we have the client tool, which is what you call when you run your Docker command. It talks to another tool called containerd. containerd is the high-level runtime that receives your request and translates it for the low-level runtime, here runC, which in turn tells the operating system: this is what you need to do, create this container, these are the resources you need to provide for it. From there, your application starts and runs. So containerd manages your container, how it's running and executing, while runC, at the low level, initiates the creation and preparation of the container.

Now, Docker made a lot of contributions to the ecosystem. They decoupled their Docker engine, extracted containerd and runC, and contributed them to the community. Those contributions led to collaboration with other entities to come up with the Open Container Initiative (OCI). The OCI then created a standard: what should a runtime do? What should a container look like? How should they all communicate, so that we have a standard across the board? That way, an application, a service, or an orchestrator can understand any container created by any runtime or client tool.

Now, like I was saying at the beginning, Docker was made to be very easy for everyone to use, but it came at a price: automation became difficult, because several layers of abstraction had to be added before it could be used.
So Docker recognized this, and that was one of the reasons they decided to contribute runC and containerd to the community. With the OCI standard, any runtime that creates a container should be able to create it, start it, get its state, kill it, or delete it, and it does all this by talking to the container image and the running container. With this standard, any container works the same way, on a laptop or anywhere else. Basically, the OCI says: this is how a container should be created, and this is how it should run and execute, and that standard is maintained across every environment where a container needs to run.

Now, there are different container runtimes. We have containerd, which was open sourced and is currently maintained under the CNCF. Then we have CRI-O; the CRI stands for container runtime interface, and the O is for OCI. The Kubernetes community realized they needed a way for the kubelet component of Kubernetes to execute container commands against the runtimes in a Kubernetes cluster. Say a kubelet has received a specification: deploy this container. It uses CRI-O to say, create this container for me; CRI-O talks to runC, runC creates it, and the deployment happens.

Now, there are different types of runtimes, from low level to high level to those with specialized functions. If you check the CNCF landscape, you'll find information about the different container runtimes out there, but we'll look at a few of them. At the high level, the most common one is containerd. From the images we've seen previously, your client tool, which can be anything, in this case Docker, speaks to containerd: I need this container to run, this is my image. Then containerd sends it to the low level. The same goes for CRI-O, Docker, and Podman.
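The lifecycle operations the OCI runtime spec standardizes (create, start, state, kill, delete) can be sketched as a toy state machine. This is a hedged illustration of the flow, not runC's actual implementation; the statuses mirror the created/running/stopped states the runtime spec names, and the bundle path and container id are made up for the example.

```python
class OCIContainerSketch:
    """Toy model of the OCI runtime lifecycle: create -> start -> kill
    -> delete, with `state` queryable at any point."""

    def __init__(self, container_id: str, bundle: str):
        self.container_id = container_id
        self.bundle = bundle          # would hold the config.json + rootfs
        self.status = "created"       # a real runtime sets up namespaces/cgroups here

    def start(self):
        if self.status != "created":
            raise RuntimeError("start is only valid on a created container")
        self.status = "running"       # the user process begins executing

    def kill(self):
        if self.status != "running":
            raise RuntimeError("kill is only valid on a running container")
        self.status = "stopped"       # signal delivered, process exits

    def delete(self):
        if self.status != "stopped":
            raise RuntimeError("delete is only valid on a stopped container")
        self.status = "deleted"       # resources released

    def state(self) -> dict:
        return {"id": self.container_id, "status": self.status}

c = OCIContainerSketch("web-1", "/tmp/bundle")  # hypothetical id and bundle path
c.start()
print(c.state())   # {'id': 'web-1', 'status': 'running'}
c.kill()
c.delete()
print(c.state())   # {'id': 'web-1', 'status': 'deleted'}
```

The point of the standard is exactly this shared shape: any OCI runtime, runC or otherwise, exposes the same operations in the same order, so higher layers like containerd or CRI-O can drive any of them.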
There are different reasons why some of these were created. The Docker engine was the most popular one before containerd. Podman was introduced because most of the others require root-level access to create containers. Podman, because it's a pod manager that runs at the high level, making requests to the low-level container runtime, doesn't require privileged access. The higher-level container runtimes are also called container managers, because their work is mainly creating, starting, stopping containers, and so on.

At the lower level, we have runC. runC is the standard, the main low-level container runtime that most other tools build on. There were others, like rkt, but a lot of the other low-level runtimes have been deprecated or are no longer maintained. runC is the main one that containerd and Docker are based on. So when you execute a Docker command, Docker talks to containerd, and containerd passes it to runC at the lower level. You can even use runC directly, but it's too low level for most use cases, so usually a container manager communicates with runC. When containerd says, I need a container, here is the specification, runC takes it, creates the container, stops, and hands the rest of the work back to containerd to continue execution.

Now, another type of runtime is the sandboxed container runtime. The containers created by runC and the other low-level runtimes, as we saw at the beginning, are not completely isolated. They are just groups of processes running to solve a particular task, while sharing the same system resources.
And even though they are jailed and within namespaces, they can still share resources, and depending on what the applications inside those containers are doing, they can actually access or affect the host operating system or other containers. So there are solutions that provide an extra layer of protection, of isolation, to ensure applications don't go beyond where they are supposed to operate or affect the host operating system.

This comes in various forms. There's gVisor, which I think was introduced by Google. The way it works is that it creates a kind of proxy to the kernel: all the calls the runtime makes to create resources pass through the gVisor proxy, which talks to the kernel on its behalf. That way there's an added layer of security before anything hits the kernel, without issues escalating. Then we have Firecracker, which creates a lightweight virtual environment, a sandbox, for your container to run in. Then Kata Containers, which basically uses virtual machines: VMs are created for your containers, so there's complete isolation, and no way for their activities to impact the host operating system or other containers.

Now, there are two major use cases for container runtimes. The first is build systems. As a developer, you want to be able to run your applications, create builds, or create standard environments, so that anyone within a company, for example, can easily kickstart a project or start working on one, because there's already a container that contains everything they need.
So the major runtime requirements here are: you need a runtime to build your images, a runtime to run them, and maybe a tool or runtime to distribute the images.

The second use case is container orchestration. In some situations your container might not need serious orchestration; maybe your use case is just to run your container, provide the necessary port forwarding, and your application runs on port 80 or 8080, a single container or a couple of containers, and you're fine. But if you need scale, you need a way to create your container images and pass them to an orchestrator, for example Kubernetes, which takes the image, deploys it as a container in a pod, and runs it. The major requirement of container runtimes for orchestrators is basically to get images and run your application. Building images is not really a function of orchestrators; the images will already have been created and passed to them, and they run the images.

Now, for build systems, the main path almost everyone is familiar with is Docker. You run your docker run command against the Docker CLI, shown on the left side of this image. The Docker CLI is the client tool; it talks to the Docker engine, and the Docker engine talks to containerd. containerd prepares the instruction and passes it to runC, and runC talks to the Linux kernel. But this is for Docker, and Docker comes as a whole suite of tools: the Docker binary ships the client and the Docker engine that builds the containers, all before it talks to containerd.
But if you are not using Docker, maybe due to licensing issues or company policies, you need other tools, especially open source ones, to create your containers. For images, talking to an image registry, publishing to it or pulling from it, we have Skopeo: with Skopeo you can communicate and interact with container registries. Then we have Podman, which is a container manager: once you've pulled your image, you can use it to create containers, check pods, exec into them, and so on. And if you want to build images, we have Buildah: with Buildah you can build container images.

So Buildah builds your images, Skopeo publishes them to an image registry or pulls from one, and Podman is the piece that talks to runC: it takes the image and passes it to runC to communicate with the Linux kernel. Why do we have different tools for different things? It's basically the Unix philosophy: do one thing and do it well, instead of one bundled solution doing a lot of things the way Docker does.

That covers the build system side; next, let's look at container orchestration.
You can use all of these on the build side, but on the orchestration side, in a Kubernetes cluster, you'll see that when Docker was being used, the kubelet communicated with dockerd, which talked to containerd, which ran runC to create the containers. What Kubernetes has now is the container runtime interface (CRI), a standard that handles the communication between the kubelet and the runtime environment. So the kubelet can use containerd to talk to runC, or use CRI-O to talk to runC. That way there's no restriction or lock-in to a particular Docker client or runtime, unlike before, when Docker was built into Kubernetes.

So now you as an individual can decide what you need. Maybe you're a developer and you want to build your images on your system: use Buildah. You want to push the images you built to a container registry, or pull from one: use Skopeo. You want to run and create containers and need a container manager: Podman does it for you, and Podman communicates with runC on your system to do the work.

Alternatively, you can still use Docker; it's available, though there are restrictions on what you can do depending on the size of your company. But if using Docker is not an option, you can use Podman with Skopeo and Buildah. On the orchestration side, you can decide to run the kubelet with containerd, or with CRI-O; in the Kubernetes documentation you'll find several guides on how to set this up.
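For the sandboxed runtimes mentioned earlier, Kubernetes exposes the runtime choice through a RuntimeClass object. As a hedged sketch, assuming nodes whose CRI is already configured with a gVisor handler named runsc (the handler name and the example image are assumptions that depend entirely on your node setup):

```yaml
# RuntimeClass selecting a gVisor-backed handler (node config is assumed).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc        # must match the handler name in the node's CRI config
---
# A pod opts in by naming the RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx    # example image
```

Pods without a runtimeClassName keep using the node's default runtime, so sandboxing can be opted into per workload.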
Now, if your requirements call for more sandboxing or more security restrictions around container creation, you can use gVisor or Kata Containers. In Kubernetes, you can create a RuntimeClass and specify gVisor, so that any time the kubelet is deploying containers into a pod, it uses gVisor instead of runC to run your applications on Kubernetes.

So that's the end of this session. I hope you've learned a thing or two about the different container runtimes we have, and how you can make a good choice of which one to use for your project or your deployments. I'll be available for questions and look forward to hearing from you. You can follow me on Twitter at 30k247 or check out my website, abuangu.me. Talk to you soon.