Welcome to my session, and I hope you are having a great time in Detroit. Today I'll be talking to you about container runtimes. First, introductions: I am a Developer Evangelism Program Manager at GitLab. My role is mainly to find new ways of creating content, and to help my team create the right thought-leadership content and get results from it. I'm also a Google Developer Expert in Cloud and a CNCF Ambassador. I'm currently based in The Hague, Netherlands, and if you want to learn more about me you can visit my website at abuango.me. So, let's get started: what are containers? This session is meant for people who are new to containers or who would like to learn more about how containers work in the background, so I'll start with the assumption that the person listening does not know a lot about containers. With traditional deployment of applications, once you or your team have built that awesome project you've been working on, you deploy it by just installing it: create a VM, install Apache or nginx, ship the application to the right folder, and it runs. Sometimes it works, sometimes it doesn't, and we always have that saying: "it works on my computer." But how do you ensure that something that works on your computer also works when it goes to production? Then came virtualization. With virtualization there is an extra layer on top of the operating system, the hypervisor, whose function is to create isolated environments, virtual machines, that contain an operating system, binaries, and everything necessary for your application to run. But a virtual machine is completely isolated from the host operating system, and it has to come with everything the application needs, including its own operating system.
Unless certain configurations are made, virtual machines do not have access to the host, and the host does not have access to the contents of the virtual machines. And virtualization isn't always the right solution to an organization's problems. That's when containers came up. Containers, in Linux terms, have been around for some time; there have been different ways of isolating processes on a Linux machine so that they function without interrupting other processes or accessing other processes' resources. After many years, containers evolved into a way to deploy an application with all the dependencies it needs, in a container, while still having access to the host's resources, and with the host system able to access the resources in the container. Unlike virtualization, where a hypervisor manages the creation of virtual machines, containers are mainly a group of processes running on the host machine but isolated from other processes. There is what we call a container runtime, which talks to the kernel and uses it to create that isolation for your containers and for all the resources your container needs to function. The main difference from virtualization: with virtualization, virtual machines are completely isolated from the host machine, while your containers are just a group of processes running on the host machine, isolated from the other groups of processes that are running. Now, if we look at how containers are basically created: a client talks to a runtime. Most of us are used to Docker Desktop and the Docker CLI, where you just type docker run, docker exec, docker build, and so on and so forth. The Docker client application, or CLI, communicates with the runtime.
The runtime is what does all the building of the images, all the creation of the containers, and all the communication with the operating system's kernel to do what is required for the container to run. When your container images are built, your client also communicates with a registry, where images are stored and can be pulled any time they are required. Basically, an image is a specification of how your container should be built when it is run. But how did all of this actually start? It started a long time ago, with Linux administrators looking for ways to isolate processes from one another. The files a certain process has access to should be isolated; the process should not be able to access other processes' files, and the resources it uses should be dedicated to it without touching what other processes need. This evolved right from the days of FreeBSD jails, to cgroups (control groups) being created at Google to limit the resources processes can use on a system, to the namespaces work at Red Hat in 2008, to IBM creating LXC. LXC was the OG Linux container project: it uses cgroups and namespaces to isolate what a group of processes can access. Then came Docker in 2013, and Docker made containers cool; it made containers much easier to use. You just download the binary, set it up, you have the Docker CLI client and the Docker daemon on your system, and it creates the containers. But one thing Docker did very well, which later became something of a disadvantage, is that it was so easy to use that when container usage evolved to more complex cases, like container orchestration in server environments and deployment at scale, it was difficult to adapt Docker to those use cases.
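As a concrete illustration of the image-and-registry flow described above, here is a minimal sketch. The image name, registry host, and file contents are all hypothetical, and the build and push steps are commented out because they require a running Docker daemon:

```shell
# Minimal sketch of "an image is a specification of how your container is built".
# All names (registry.example.com, demo/web) are illustrative.
mkdir -p demo && cd demo
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF
echo '<h1>hello from a container</h1>' > index.html
# Building and pushing need a running Docker daemon (not done here):
#   docker build -t registry.example.com/demo/web:1.0 .
#   docker push registry.example.com/demo/web:1.0
```

Once pushed, any host with a runtime can pull that image from the registry and get the exact same container.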
Yes, it's easy for a developer to use to build things, but to automate it and plug it into more complex deployments like Kubernetes, there was a lot of overhead in trying to make Docker work the way those platforms wanted to use it. Which is why there has been a rise in container implementations other than Docker. Now, let's look a bit under the hood at what a container actually does in the background. A container is basically an isolated group of processes running to execute your application: you have your application, and you have the dependencies your application needs to run. Those dependencies run as processes on the host Linux environment, with isolation around, for example, which files they have access to. If `cd /` is executed within your container, what should be presented as the root of its file system? If someone within the container types `cd ~` (the tilde symbol, which is home), where should it go? It definitely won't go to the home of the host operating system's root user; a folder has been designated for that group of processes, for your container, to be seen as its home. All of this happens using features of Linux, namely namespaces, cgroups, and chroot, and we'll be looking more into those. The first one is chroot. When a process is running on your Linux machine, chroot can limit where it can go on your file system. For example, in the slide image you're seeing here, we have our main root directory /, then /home, then /home/me, and within that we create a folder called test. Within that test folder, we then create other system folders like bin, usr, and var.
Then we jail a particular process into the test directory. Jailing the process into that directory means that any time the process tries to access the root directory, it doesn't go to the real root; it goes to its own new root, which is inside test. So this is one way to make sure a process, or group of processes, is limited to a particular directory or section of the file system; it won't be able to access other parts of the file system to make any modification. Now, this is not the strongest security on its own, because given the right amount of skill or the right amount of programming, processes can definitely break out and access the host file system. Here is another representation of it. We have our root directory at the top, then a folder called joe inside our home directory. We want to jail the user or process joe to the joe directory; it should not be able to access anything else. So we create a new root inside it, with its own bin, usr, and the other necessary folders, just like the real root directory has. Any time joe tries to access /, or /bin, or /usr, or whatever, it is limited to what is inside the /home/joe directory. Another feature of Linux is namespaces. Like I mentioned earlier, namespaces were added to the kernel in version 2.6.24, in 2008. What a namespace does, basically, is give your process its own view of certain resources within your Linux system. Take the PID namespace, for example: we already know that PID 1 is systemd, or the init process, running on your system. So what becomes the PID 1 of your container?
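The chroot jail described above can be sketched in a few shell commands. This is a minimal illustration, not a hardened setup: the jail lives in a temp directory rather than /home/joe, and the actual chroot call is shown as a comment because it requires root:

```shell
# Build a minimal jail directory tree (paths illustrative, uses a temp dir).
jail=$(mktemp -d)
mkdir -p "$jail/bin" "$jail/lib" "$jail/usr"
cp /bin/sh "$jail/bin/"
# On a real system you would also copy the shared libraries that
# `ldd /bin/sh` reports, then enter the jail (root required):
#   sudo chroot "$jail" /bin/sh
# Inside that shell, "cd /" lands in $jail, not the host's real root.
```

The process inside the jail simply cannot name paths outside its new root, which is the same idea a container's root filesystem builds on.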
Your container can have its own PID namespace, whereby if you check PID 1 inside it, it is a specified process of the container that is running as PID 1. For network resources like devices and ports, there can be an isolation too: this is the eth0 or eth1 of your container, not necessarily tied to the real eth0 or eth1 of the host system. So basically, namespaces provide isolation for your container, your group of processes, telling them: these are all the resources you have access to, these are the resources within your own environment, your own namespace, that you can use. There are other namespaces, like the mount namespace, IPC, and the hostname and domain name namespaces, that containers take advantage of. The next thing is cgroups, which are for resource control. We've said from the beginning that your container is basically a group of processes executing toward a common goal, running your specific application. The processes within your container can be allocated, say, one gigabyte of memory and some number of CPUs. cgroups ensure that those processes stay within the limits specified for them. If any of the processes uses more memory than allowed, it gets killed (OOM-killed, out-of-memory killed), and if they start using more CPU than allowed, they get throttled. So cgroups let you specify resource limits for the set of processes within your container. The image used here is from Julia Evans, who creates very awesome illustrations of topics in technology; you can visit her on Twitter to check out some of the illustrations she has done. Let's revisit this image of how a container runs: basically, you have the client, you have the runtime, and you have your registry.
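The namespace and cgroup behavior described above can be poked at directly with standard Linux tools; container runtimes drive these same kernel interfaces. This is a hedged sketch: it assumes a Linux host with util-linux (for `unshare`) and cgroup v2 mounted, and the steps are shown as comments because they require root:

```shell
# Namespaces by hand: give a shell its own PID and mount namespaces.
# With --fork and --mount-proc, the shell inside sees itself as PID 1.
#   sudo unshare --pid --mount --fork --mount-proc sh -c 'echo "I am PID $$"; ps aux'
#
# cgroup v2 by hand: cap a group of processes at 256 MiB of memory.
#   sudo mkdir /sys/fs/cgroup/demo
#   echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/demo/memory.max
#   echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs   # move this shell in
# Any process in the "demo" group that exceeds memory.max gets OOM-killed.
```

A container runtime is essentially doing these two things, plus chroot-style filesystem isolation, for every container it creates.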
But what happens under the hood in the runtime? Let's take this image, for example, for Docker. One thing Docker did at some point was recognize the need for more representation within the community, and one way to achieve that was to give back to the community. They decoupled the Docker binary into containerd and runc and contributed them to the community through the Open Container Initiative; I will talk more about that later. Basically, the Docker binary was split up: you have the client, which is the docker command you execute; then you have a high-level runtime that receives the instructions from the client, handles image creation and management and so on, and passes instructions down to the low-level runtime, which does the actual work with Linux to create the container. The low-level runtime actually creates the container processes. containerd is one of the most common high-level runtimes, and it's what Docker uses; runc is the default low-level runtime. Most other low-level runtimes are either deprecated or not widely used; runc is the default that is being used, and it sits under the OCI as the main OCI runtime. If you look at this illustration, it shows that when you execute docker or docker compose, the instructions are sent to dockerd. dockerd is the container engine that receives the instruction and passes it down to containerd. Between containerd and the OCI runtime you have the containerd shim, which abstracts some of the low-level interactions that need to happen with the main runtime, the OCI runtime, which is runc. And runc, at the end of the day, creates all the Linux processes required for your container to run. As time went on, the community decided: let's have a standard. Different tools were coming up for creating and managing containers; let's have a standard everyone agrees on.
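On a real Docker host, one way to see the layered stack just described is to start a container and look at the resulting processes. This is purely illustrative: the commands are commented out, and process names vary by installation and version:

```shell
# Illustrative only; assumes a host with Docker installed.
#   docker run -d --name web nginx:alpine
#   ps -eo pid,ppid,comm | grep -E 'dockerd|containerd|shim|nginx'
# Roughly, the chain looks like:
#   dockerd           (engine: receives the CLI/API request)
#   containerd        (high-level runtime: images, storage, supervision)
#   containerd-shim   (stays alive per container after runc exits)
#   nginx             (your container's own PID 1)
# runc itself is short-lived: it sets up the namespaces, cgroups, and
# root filesystem, starts the process, then exits, leaving the shim
# to report status back up to containerd and dockerd.
```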
This is what a container should be called, these are the things a container should do, this is how a container should be created, and so on and so forth. Then came the Open Container Initiative. What should a runtime do? It should create, start, get the state of, kill, and delete a container. And how should it delete it? What are the different ways it should do that? So the Open Container Initiative, with a lot of people and a lot of companies in the community contributing to it, specifies what a container is and what a container runtime can do. One of the awesome things that came out of the Open Container Initiative is runc, the default runtime that does all the low-level creation of containers. It's very rare to see someone using runc by itself on a system; it is usually used together with a container manager like containerd, or Docker itself. And having this standard ensures that containers work the same anywhere: on your laptop, on a server, or in the cloud. "It works on my system" now means that if it works on your system, it should work anywhere else. Now, if we look at the CNCF landscape, there are a couple of container runtimes that have graduated. We have containerd, and we have CRI-O. CRI-O is an implementation of the container runtime interface that was created for Kubernetes, to ensure that Kubernetes is able to communicate with different runtimes. There are a couple of others; I'm going to be looking at some of them, and they create and handle containers differently. The first one is containerd. Basically, it manages the complete container lifecycle on its host operating system, which includes image transfer and storage, container supervision and execution, and everything around networking that ensures a container works. And it does this by talking to the low-level container runtime, which is runc.
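The OCI runtime lifecycle operations just listed (create, start, state, kill, delete) map directly onto runc subcommands. Here is a hedged sketch of driving runc by hand; the commands are commented out because runc requires root and a prepared bundle, and this is exactly what containerd or dockerd normally does for you:

```shell
# OCI lifecycle with runc directly (root required; bundle layout illustrative).
#   mkdir -p bundle/rootfs && cd bundle   # rootfs holds the container's files
#   runc spec                             # generate a default OCI config.json
#   runc create mycontainer               # create: namespaces/cgroups set up, process not yet started
#   runc state mycontainer                # state: prints JSON with status "created"
#   runc start mycontainer                # start: exec the process named in config.json
#   runc kill mycontainer KILL            # kill: send a signal to the container process
#   runc delete mycontainer               # delete: tear down the remaining resources
```

Any runtime that implements these operations against an OCI bundle can slot in where runc sits, which is the whole point of the standard.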
runc, basically, is the low-level CLI tool that does all the spinning up and tearing down of containers according to the OCI specification, the standard specification for creating containers, so that a container created with Docker, with Buildah, or with any tool at all should work regardless. runc was initially developed by Docker, like I mentioned earlier, but it was donated to the OCI as the first runtime-spec-compliant runtime: the OCI created a runtime specification, and runc was the first one compliant with the entire specification. Then we have CRI-O. CRI stands for Container Runtime Interface; I don't remember what the O stands for, but yes, CRI-O. It's an implementation of the Kubernetes Container Runtime Interface, which enables any OCI-compatible runtime to work with Kubernetes. You can say: I want to use containerd, or this, or that, in my Kubernetes cluster. Previously, Kubernetes was tied to Docker, but the Container Runtime Interface allows you to choose which runtime to use: runc, or gVisor, or something else in the background to run your containers. You can do that with CRI-O. We also have Podman, which is a container manager. But unlike containerd or Docker, it also runs pods, the feature we all know Kubernetes for. It runs pods, and another thing it does is let you run rootless containers, containers that run within the context of the currently logged-in user. There are limitations, by the way: there are certain activities that such containers cannot do unless you pass a privileged flag to allow them to execute those commands.
But if your use case doesn't require any privileges, Podman is one of the options for you that is more secure, because it doesn't need any root-level access. Then we have Buildah. Buildah is for building containers: it doesn't run anything, it doesn't do any execution, it basically builds your container images. You can use Podman with Buildah; in fact I think Podman actually uses Buildah in the background to build images. And we have Skopeo, which is for inspecting images and also publishing your images to a container registry. Looking at runc again: we've talked about runc before, but basically the goal of runc is to be the standard for how containers run, anywhere, completely. Its primary features: it supports the full set of Linux namespaces, whether PID, network, or mount, all of them at the low level; and it supports security features like SELinux, AppArmor, and so on. And because the specification runc implements is governed by the Open Container Initiative, which is under the Linux Foundation, there's a guarantee it will keep being maintained. Unlike others like rkt, which were popular for some time and have mostly been deprecated, this one has a lot of backing and is going to be around for a long time. Like I mentioned earlier, it was released as part of the Docker container platform but was later spun out; I think Docker contributed a lot to the container ecosystem. Now, we have other runtimes that do things a bit differently. Like we said, containers are not completely isolated from your host operating system; they are just a group of processes executing your program in an isolated environment. But for security, some folks have raised the concern that processes within containers can still go beyond the isolation, escape from it, and possibly wreak some havoc.
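Buildah and Skopeo, mentioned above, can be sketched together as a build-and-distribute pipeline. The image names below are made up for illustration, and the commands are shown as comments since they assume the tools are installed and the registries exist:

```shell
# Illustrative Buildah/Skopeo workflow (image names hypothetical).
#   buildah bud -t quay.io/example/web:1.0 .         # build an image from a Dockerfile
#   skopeo inspect docker://quay.io/example/web:1.0  # read the remote manifest without pulling
#   skopeo copy docker://quay.io/example/web:1.0 \
#       docker://registry.example.com/web:1.0        # copy registry-to-registry, no local daemon
```

Note that Skopeo talks to registries directly, so copying or inspecting images never needs a running container engine.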
That is where sandboxed container runtimes come in, like gVisor or Kata Containers, and the way they work is that they use virtualization to completely isolate your containers from the host. Largely, the main reason behind this is improved security, especially for critical workloads where you have concerns that things should be completely secure, and especially in regulated environments where security is very, very important. But unlike simply using virtualization, these runtimes are lightweight and have a minimal resource footprint. It's not like they go and spin up a full virtual machine to run your container, no; the implementation is much more lightweight, because you need to maximize resources and still have good performance. With this, let's look at the main use cases for containers. The first one is build systems, of course: as a developer, you want to build your applications and package them as images, and then run those applications as containers, or push the images to a registry, or use Podman to run them as pods. A pod is a group of containers running together to run a particular application. And what are the main things you would require build systems for? To build your images, to run your images, or to distribute your images. When it comes to container orchestration, the images will have already been built and distributed to your registry, so the orchestrator basically just pulls and uses them; in the case of Kubernetes, the kubelet does all that: pulls the image and uses it. Now, let's see an example of the flow of a build system. You have the image registry where you keep all your images, and you have the local store on your system; when you do docker pull, it pulls those images onto your system. And when you're using Docker, your Docker CLI talks to the Docker daemon.
The Docker daemon takes your request from the CLI and hands it to the containerd runtime, and containerd creates the containers with the help of runc, which talks to the Linux kernel. But Podman does things differently. Instead of having a daemon that does all the execution as the root user, at root level, creating the containers and so on, Podman can work rootless, without privileged access. Instead of using a daemon, it creates all the containers as child processes of Podman itself. That means your containers can run in the context of the currently logged-in user instead of root. There are limitations to that; you can explore them further depending on your use case. But this makes Podman more secure and more accessible to use. Also, Podman can be a drop-in replacement for Docker: you can install Podman and even create an alias so that any time you execute docker, it calls the Podman executable on your system. Podman is part of a suite of tools, including Buildah, Skopeo, and conmon, and you can use all of them together. While Docker provides everything in the daemon, and the daemon watches your container for its entire lifecycle (it's running, it reports back the status, and everything), Podman works with runc to create the containers but uses conmon to monitor them and handle communication with the container, to tell you this container is dead and so on. Podman then uses Buildah to build and distribute images, and Skopeo for inspecting those images. So Podman, conmon, Buildah, and Skopeo are a suite you can use together for your containers. Now, for container orchestration: let's say you're using Kubernetes, for example. With your kubelet, you can decide to configure your Kubernetes cluster to use containerd, or CRI-O, or Kata, or gVisor.
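Podman's drop-in story described above can be sketched like this. The alias is common real-world practice; the image and pod names are illustrative, and the container commands are commented since they assume Podman is installed:

```shell
# Use Podman wherever muscle memory (or a script) says "docker".
alias docker=podman
#   docker run -d --name web -p 8080:80 nginx:alpine  # actually invokes podman
#   podman pod create --name mypod                    # pods, no Kubernetes needed
#   podman generate kube mypod                        # emit Kubernetes YAML for the pod
```

Because Podman is daemonless, each of those containers would run as a child process of Podman under your own user, not under a root daemon.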
Because Kubernetes has the Container Runtime Interface, any OCI-compliant runtime can be plugged into the CRI for Kubernetes to use in running your containers. And those runtimes use runc underneath, just the way Docker uses dockerd to work with containerd to run your OCI-compliant image. Basically, the runtime you use depends on what exactly your use case is. If you are a developer, Docker should be fine for you, as long as you are within the limits of usage that Docker specifies. Using Docker as a company is becoming more difficult because there are restrictions; I would advise visiting Docker's website to learn whether you are able to use Docker. But if you are unable to, or you want to stick to the spirit of the community, you can choose Podman. You can even use Podman to run pods, not just containers, unlike Docker, where you cannot run a pod. And Podman can be a drop-in replacement for Docker: basically, create an alias for it, and everything you need from docker pull, docker this, docker that, Podman can do. But remember Podman is part of a suite: it uses conmon in the background to handle the monitoring of your containers and the communication. Docker's daemon does everything itself, but because Podman is daemonless, it depends on a couple of other tools for all the activities needed across the entire lifecycle of your container: it uses Buildah in the background to build your images, and Skopeo for inspecting your images. So you need to use the full suite to get the full benefit of the entire lifecycle. But if you work in a regulated environment, or because of some regulation or just your needs you require a more secure runtime that runs your images in a completely isolated environment, then gVisor, Kata Containers, and Firecracker are options you can look at, and you can even make your Kubernetes cluster use one of these sandboxed runtimes to run your applications.
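Selecting a runtime in Kubernetes, as described above, happens at two levels: per node, by pointing the kubelet at a CRI endpoint, and per pod, via a RuntimeClass. A sketch follows; the socket path and names are illustrative, and the `runsc` handler assumes gVisor is installed on the node:

```shell
# Per node: point the kubelet at a CRI socket (path illustrative):
#   kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
# Per pod: define a RuntimeClass for a sandboxed runtime such as gVisor.
cat > runtimeclass.yaml <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # the OCI runtime binary gVisor provides
EOF
#   kubectl apply -f runtimeclass.yaml
# Pods then opt in with:  spec.runtimeClassName: gvisor
```

Pods without a runtimeClassName keep using the node's default runtime, so sandboxed and ordinary workloads can share a cluster.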
Also know that if you are using Kubernetes, you can specify which runtime to use. Previously, Kubernetes was tightly coupled to Docker. Then the community decoupled Docker, which led to that uproar at one point: "ah, Kubernetes is removing Docker," etc. No, Kubernetes was not just removing Docker; it was making it possible for different OCI-compliant runtimes to work with Kubernetes, including Docker. So if you want to use containerd with the kubelet, fine, you can set it; you can check the Kubernetes documentation for how to configure different runtimes for your Kubernetes cluster. Now, that's all from this session. Thank you very much for joining. If you have any questions or you want to engage with me, I will be waiting to answer some of your questions afterwards. You can also reach me on Twitter at 30247. My website is abuango.me. Also, if you would like to give feedback about this session, you can scan this QR code or follow any other instructions given after the session. Thank you very much for joining me, and I hope to engage with you soon.