So, hi everybody, and welcome to this Fedora Classroom. I'm Alessandro Arrichiello, Solution Architect for Red Hat, and today we will talk about containers with Podman on Fedora 29. First of all, who am I? I'm Alessandro, I graduated in computer engineering here in Italy, I'm currently working as a Solution Architect for Red Hat, as I said, and I'm a very passionate Linux fan. My first Red Hat Linux installation was at the age of 14, and after that I never left it and kept using Linux in my home and work life. But what about me and Fedora? I have used Fedora as my primary operating system for work and personal use for about five years now, I think. And I love placing stickers all over my laptop and letting my friends and colleagues guess the open source project behind each logo. My favorite desktop environment is GNOME, so don't blame me, and as you will see, I'm currently running Fedora release 29. What about the logo? You will find the Red Hat logo on almost every slide because the content comes from a Red Hat slide deck. Moving forward, today we will talk about Linux containers. We will deep-dive into the container architecture, and then introduce container runtimes. We will see some examples of how to pull and run containers through Podman, and then how to manage networking, logging, security, and persistent storage: some basic examples, nothing too advanced. Finally, we will also introduce running system services in containers. At the end of the slides I will also give you some links and docs that you can explore further to get started in the world of containers. Starting with Linux containers: what are containers? As with everything in technology, it depends who you ask. On the infrastructure side, basically, a container is an application process with a shared kernel that is simpler, lighter, and denser than a virtual machine, and it is also portable across different environments.
On the application side, so looking at the application, the real software you run, a container is just a package: it packages, in a sort of archive, a tarball, the application with all of its dependencies. With this kind of technology you can deploy the container in any environment in seconds, and this allows you to easily move this kind of application across different and shared environments. Comparing, then, what a virtual machine and a container are: as you can see from the slide, a virtual machine isolates the hardware. Starting from the bottom of the stack, from the hardware, the real server, your laptop or a server in the cloud, you have a hypervisor, a software stack running virtual machines. Inside each virtual machine you have a kernel and shared operating system dependencies, and these are used by the multiple applications running inside that virtual machine. On the other side, moving to the right of the slide, you will see that a container isolates the process itself, so the application. You still have the hardware, of course, your laptop or your physical server, and on top of it a shared host kernel; in this case you have multiple containers running side by side, each one with its own operating system dependencies and its own application. So you can easily manage multiple sets of operating system dependencies on a shared kernel, instead of having multiple virtual machines that also replicate virtual hardware. In the case of containers you don't need virtual hardware. Of course you can run containers on top of virtual machines, but if you want, you can easily avoid the virtualization layer and the virtual machine altogether.
Moving forward: on the virtual machine side we have, of course, virtual machine isolation, so you have an entire machine that is completely isolated, and you can have multiple virtual machines isolated from each other. But, on the other hand, you have a complete operating system with a static allocation of compute and memory, and so higher resource usage, because you have to define the amount of compute and memory before starting the virtual machine, or edit it during runtime. On the container side you have container isolation with a shared kernel and flexible compute and memory: you can let the containers use, at runtime, basically the compute and memory they actually need, with lower resource usage, because a container generally consumes far fewer resources. And this generates the main differences between virtual machines and containers. Virtual machines, in general, are not portable across different hypervisors, and cannot easily be used for packaging an application. Containers, on the other hand, can guarantee application portability, because you pack the operating system dependencies and the application as a whole, in one archive, in one container, and then you can move your container from a laptop to bare metal, to virtualization, to a private cloud, to a public cloud: you can move it easily. You build your container, your container image, as we will see, once, and then move it between different environments. Going deeper: in the container world we can think of the container as the smallest compute unit, the smallest piece that we can put into runtime, but containers themselves are created from container images. Think of the container image as the archive containing all the operating system dependencies; when you run a container, it starts from a container image.
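As a quick sketch of the image-to-container relationship just described, assuming Podman is installed (the image name is the one used later in this session, and the container names are hypothetical):

```shell
# One immutable image...
podman pull registry.fedoraproject.org/f29/httpd

# ...can be the starting point of many independent containers.
podman run -d --name web1 registry.fedoraproject.org/f29/httpd
podman run -d --name web2 registry.fedoraproject.org/f29/httpd

# Both containers appear side by side, created from the same image.
podman ps
```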
Going deeper, a container image is nothing more than all the libraries, as we said, that let your application, your service, be up and running: imagine libraries such as glibc and SSL, binary packages, including, for example, the software for managing those packages and software repositories, and so on. Usually container images are organized in layers, so you can add, layer by layer, more software to your container image. Container images, in turn, can be stored in another element called an image registry, where you can basically store multiple kinds of container images in multiple versions, and then pull your base image, your favorite one, from one of those registries to run your container. And of course Fedora has its own container image registry, as you can see, and as we will also see in the demo: pointing your browser to registry.fedoraproject.org, you will find a list of all the containers that this registry is hosting. Going deeper into registry servers: as you can imagine, a registry server can offer a set of actions, like finding images, pulling images, running builds, and also letting you explore the various details that an image contains about software, about tags, and so on. So, basically, a registry server exposes a set of actions for working with images. And, as we said, an image repository contains all the versions of an image in the image registry. As you can see in this example, you can have in your registry an image called frontend with multiple versions, and other images, for example a Mongo database, again with multiple versions. There is also a special tag called latest that always identifies the latest image you pushed, that is, uploaded, to your container image registry. But, going forward: containers don't run on Docker. We have this myth about Docker containers.
We have to say that Docker is one of the many user-space tools and libraries that talk to the kernel to set up containers, and we will see why. Basically, containers don't run on Docker: Docker is just a format and a tool that allows us to play with and manage containers. Containers are just processes, and they run on a container host. So containers, basically, are built out of Linux, of an operating system, and they are just processes. Moving forward, and going deeper into the technology: a container host is built of a kernel, as we said, and a container runtime. A container runtime is an application that actually lets you manage and play with containers, like, for example, runc, which Podman uses; and a container host may also include another tool to let the container engine or container runtime talk with other components, for example an orchestrator like Kubernetes or OpenShift. So the container host is the physical server, or your laptop, or your virtual machine, that will actually spawn new containers. It's just Linux: it could be your Fedora, a CentOS, an Ubuntu, a Red Hat Enterprise Linux, and so on. But, going deeper again: a kernel is just a set of system calls, memory, CPU, devices, drivers, file systems. The kernel is the gatekeeper for accessing resources and data structures, and of course the kernel manages the different system calls for orchestrating the different processes side by side. Creating containerized Linux processes is nothing more than creating regular processes. There is no kernel definition of what a container is; there are only processes. What the kernel and the container runtimes do is create an isolated environment that lets you, for example, associate a dedicated network, a dedicated process ID, a mount point, and a user that will be used to run the process inside the container.
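You can see this "just processes" point directly from the host. A minimal sketch, assuming Podman is installed and using the httpd image that appears later in this session ("demo" is a hypothetical container name):

```shell
# Start a containerized web server in the background.
podman run -d --name demo registry.fedoraproject.org/f29/httpd

# From the HOST's point of view, the container's processes are
# ordinary Linux processes, visible in a plain process listing:
ps aux | grep '[h]ttpd'

# podman top shows the same processes, scoped to the container:
podman top demo

# Clean up.
podman stop demo && podman rm demo
```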
So, basically, on the kernel side there is no knowledge of the container, only a sort of segregation of different processes on the same Linux system. Returning to the container engine: as we said, we have different container engines and runtimes, for example runc, which Podman uses, or containerd and dockerd. A container engine provides an API that can be consumed by users or other applications, and prepares the data and metadata of a container image so that a container runtime can actually run it. And of course the container engine takes command-line options for defining the configuration of the container you want to start, for pulling, that is, downloading, images from a container image registry, and then it handles all the other details, for example mounting file systems and ensuring isolation, so defining the data structures that the container itself will use. Going deeper into container runtimes, we have to say that there were early concerns with Docker, because Docker, as we said, was one of the first container engines and formats for containers. It requires a daemon for running containers and for building new containers; it requires root, privileged access, to the runtime; and using a daemon on your container host, when playing with an orchestrator like Kubernetes, can of course become a single point of failure. So, for that reason, Docker, Red Hat, and other companies created, in June 2015, two specifications, one for container runtimes and one for the image format. This initiative goes by the name of the Open Container Initiative. It defines, basically, how the runtime should create a file system bundle, and, on the image format side, how to create an image itself, a container image. So the runtime spec defines how the container should be run and how a runtime should implement that specification.
And the first and default implementation of the runtime spec is runc, which was donated by Docker to the project. The image format spec, then, basically defines how a build system should perform the build of a container image, and the output usually includes an image manifest with some metadata, a file system serialization, an image configuration, and so on. We then arrive at Podman, which is included in the latest Fedora 29: a daemon-less tool for running, managing, and debugging OCI containers and pods. As we said, it doesn't require a daemon, it leverages runc, the default runtime for OCI containers, for the Open Container Initiative, and it provides a Docker-like syntax for working with containers. So it is really easy to move from the Docker CLI to the Podman CLI. It provides remote management via an API through varlink, and also systemd integration for managing containers as system services. Podman was part of the Project Atomic project on GitHub, and as we said, it brings all the technology for managing OCI-compatible containers and images. So Podman can be used, for example, for pulling down an image from an image registry and running new containers, or for managing the just-downloaded container image. Going through the examples: I prepared a set of slides, but of course we will run the examples live in a console on my Fedora laptop. The first one will be pulling a container image from the Fedora project registry. We will pull an httpd container image; for the ones who don't know what httpd is, it's just a web server. So, basically, we'll pull down this image and then we'll inspect it for details. We take the image from the Fedora container registry, and we search for f29/httpd; as you can see, there is already a predefined command for pulling down the image for use with Podman.
I already installed Podman, just a dnf install podman on my Fedora 29, and then I open my terminal and run the command. I had already downloaded the image, so the process is really, really fast, as you can see; and of course we can inspect the images already downloaded by launching podman images. This will display all the various images: if we have multiple images here, it will display all the images downloaded from the various registries. Of course, I can also download images from Docker Hub or other registries. Going deeper, we can, for example, inspect the metadata of our container image: the ID that represents the image on our system, the name it has on the registry, and, for example, the user ID that will be used for running that container image. We also have the exposed port, and we will see in the networking part that this data lets us expose a port from the container network directly onto our laptop's or server's network. Again, we have a set of metadata and environment variables that can be modified and personalized for running our containers in a different way. As you can see, we also have metadata representing the command that will be run inside the container: in this case the command will be /usr/bin/run-httpd. And finally, the various metadata and tags that identify the image and let the registry handle it properly. Moving forward, we can then run the image we just downloaded, so transform that container image into a running container: we just run podman run with the httpd image, for example. And as you can see, we instructed Podman to run a new container. In this case the container is holding the terminal, because it is running in interactive mode, so it just displays its output: it is starting up and waiting for requests.
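The steps shown so far, plus the lifecycle commands that come next, can be condensed into a short sketch (package and image names as used in this session; the container name is the one chosen later in the demo):

```shell
# Install Podman on Fedora 29.
sudo dnf install -y podman

# Pull the httpd image from the Fedora registry.
podman pull registry.fedoraproject.org/f29/httpd

# List local images and inspect the image metadata
# (user ID, exposed ports, environment, command to run).
podman images
podman inspect registry.fedoraproject.org/f29/httpd

# Run it in the foreground (holds the terminal; Ctrl-C to stop)...
podman run registry.fedoraproject.org/f29/httpd

# ...or detached and named, then manage its lifecycle.
podman run -d --name my-httpd-service registry.fedoraproject.org/f29/httpd
podman ps                      # running containers
podman stop my-httpd-service   # stop it
podman ps -a                   # stopped containers are still listed
podman rm my-httpd-service     # remove it from the history
```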
Of course, our terminal is locked, because it is waiting for more output, so we can stop the running container with Ctrl-C and then re-execute it in daemon mode: the -d option tells Podman to run it in the background. We can then inspect the running container with podman ps. As you can see, Podman reports that there is a running container on the machine, with this ID and this source image. From here we can, of course, terminate, stop, our container, and then check the running containers again: there are no more containers running in the background. As you can see, we stopped it by specifying the ID of the container. Of course, we can also access the previously run containers: we have a list of terminated ones that we may, and can, remove from the history, specifying, again, the ID. Of course, we have plenty of help and options that we can invoke on the command line; as you can see, there are a lot of options and values we can define for better running and personalizing the execution of our container. In our case we just try this one: we give a name to our container, to better identify it during the next examples. So we run podman run, with the --name option set to my-httpd-service, and -d, because we want to execute it in the background, and then again the image name we just downloaded, as I showed you, the httpd one. And as you can see, here we have this image running in the background with the name my-httpd-service. Going forward, we can then inspect not only the container image but also the running container, by specifying the name: we launch the command podman inspect my-httpd-service. We get a set of environment metadata that is the same as the image's, but we also get some information about the running container, because, as we said, the kernel and the container runtime virtualize some data structures and some namespaces.
In this case, we have a fully isolated network namespace that assigned an IP address to our container: 10.88.0.44 is the IP address of our container. We can check whether our container is running properly by contacting this IP address with curl; curl is just a command-line tool for grabbing web pages. We specify the port, because, as we saw, the exposed port in the image was 8080, and then we just run it. As you can see, we get back an HTML page: it's just the hello page that the web server shows when you connect with no index uploaded. Going forward, and jumping back for a moment to the slides: we will run another example showing that a container is, by default, ephemeral. It means that it holds no data at all. We can, of course, put text and data in a file on the file system of our container, but if we kill or stop the container and start a new container from the same image, the content of that container will not be there anymore. So, jumping back to the console: we already have a container running, my-httpd-service. We first kill this container and remove it, so we clean up our environment and start from a fresh one. We have no containers running, so we run a new container in the background. We have our running container here; then we use a special command called exec, which lets us go inside our container by specifying a process to run. In our case we want to run the /bin/bash process: it's just a shell inside our container. Then we create some data, "my secret data", for example, and place it in a file called my.data. As you can see, now we have a file called my.data, and we can cat it to look through it. Okay, we exit from our container, which is, of course, still running; we stop it, and then remove it, to be sure.
And then we run it again with the same command we launched before, so we run a new container from the same image, httpd. Again, we connect to the running container by opening a terminal: we use the options -t and -i, because we want a terminal and we want it in interactive mode, so we instruct our container engine to do this, and we pass the ID of the running container. And, as you can see, there is no data anymore. This is to show you that a container is a completely isolated and portable environment, but, at runtime, when we launch a new container, that container is ephemeral: there is no way to save data inside the container, apart from mounting persistent storage into it, and we will see how to do that in a later example. For the moment, keep in mind that all the data, the edits, the personalizations, and even the additional software you install into the container will not be saved if you stop, kill, or remove the running container, because the container image is still the same: it is immutable, it cannot be edited. Of course, you can create new container images, and so define the base content that your containers will have. Imagine you want to create an HTTP server with a default web page on it: you can do it, you can build a new container image starting from the one that we saw. There is another tool, called Buildah, which is the main companion of Podman for building new container images. Again, it's OCI-compliant, so it respects the image format specification from the Open Container Initiative; it does not require a daemon, nor a Docker socket, and it can generate and create new container images starting from a Dockerfile. Unfortunately, we will not go through Buildah in this session.
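Stepping back, the ephemerality demo just walked through can be sketched like this (the file path /tmp/my.data is an assumption; the session only names the file my.data, and "eph" is a hypothetical container name):

```shell
# Create a file inside a running container.
podman run -d --name eph registry.fedoraproject.org/f29/httpd
podman exec eph /bin/bash -c 'echo "my secret data" > /tmp/my.data'
podman exec eph cat /tmp/my.data          # prints: my secret data

# Throw the container away and start a fresh one from the SAME image.
podman stop eph && podman rm eph
podman run -d --name eph registry.fedoraproject.org/f29/httpd

# The file is gone: every new container starts from the immutable image.
podman exec eph cat /tmp/my.data          # fails: No such file or directory
podman stop eph && podman rm eph
```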
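Just to give a flavor of Buildah before the dedicated classroom, here is a minimal sketch of building a derived image from a Dockerfile (the base image is the one from this session; the page content and the image tag are made up for illustration):

```shell
# A default web page and a two-line Dockerfile that layers it
# on top of the httpd image.
echo '<h1>Hello from my container</h1>' > index.html
cat > Dockerfile <<'EOF'
FROM registry.fedoraproject.org/f29/httpd
ADD index.html /var/www/html/index.html
EOF

# Build an OCI image from the Dockerfile, daemon-less, with Buildah.
buildah bud -t my-httpd:v1 .

# The new image is visible to Podman as well.
podman images | grep my-httpd
```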
I invite you to look at the classroom that should be scheduled for next week, completely dedicated to this tool, to Buildah, to learn more. Going to the next example, on isolation: we will go through another example looking at the basic or simple modifications, or installations, we can do in our container, to see that we can break things or make changes in our container without affecting the main operating system. In this case I will again run the example on my laptop, and I will show you that the various edits and personalizations I make in the container will not be reported or replicated in the operating system of my Fedora laptop. Moving back to the console, we can look at the running containers: as you can see, we already have a running httpd container. Again we use the command exec for executing a new terminal inside the container, and we use a new option that we haven't seen before: -u, which stands for user, and we specify the root user, to have the rights we need inside the container. If we executed the command without -u root, you would see that we get the standard user that is defined in the image metadata, and as you can see this is UID 1001. So in our case we want to force Podman to give us access as root. We are inside the container as root, as a system administrator, and then we can run, for example, a software installation: we install, from the Fedora repos, the iputils and procps-ng packages, for looking at the running processes and at the IP address, of course. As you can see, we can just run the system package manager inside our container: it is updating the repositories available inside our container, and this will not affect our running operating system at all, in my case the Fedora 29 running on my laptop. The process should complete in a few seconds; basically it will download the needed packages and install them.
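The package-installation step above can be sketched as follows ("web" is a hypothetical name for the running httpd container):

```shell
# Without -u root, exec would use the image's default user (UID 1001);
# here we force root so we are allowed to install packages.
podman exec -u root web dnf install -y iputils procps-ng

# The new tools exist only inside the container, not on the host.
podman exec web ping -c 3 google.com
podman exec web ps aux
```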
After that we'll also try to break some things and delete some stuff from the container, and see that this will not affect my Fedora 29 laptop at all. First of all we try to ping, for example, google.com. As you can see, the ping is working properly, so we can reach the outside network from our container; this is done through the container runtime, of course, which allows our containers to talk with the external network. Now we want, for example, to remove the name server, and then we ping google.com again: there is no DNS anymore, so we cannot contact any external address anymore, there is no more name resolution service in this container. Again, we can, for example, also look at the date, then move /etc/localtime to /etc/localtime.backup, for example, and link a new time zone, /usr/share/zoneinfo/America/New_York, to /etc/localtime. As you can see, we just changed the time zone in our running container. We exit from the container itself, we try to ping google.com from the host, and as you can see all the DNS stuff, the name resolution, is working properly there, and the date, for example, shows the previous value. So, basically, the container is just an isolated environment that you can use to test your stuff, your application, without affecting the container host. In our case the container host is the Fedora 29 on my laptop, and the container itself is, again, a Fedora 29, but an isolated environment. Moving back to the slides, we can jump to the networking side, where we will see how to expose our services so that they are reachable, basically, by the world. We saw previously, while inspecting our container image, that we have metadata called the exposed ports.
Jumping back to the terminal and inspecting the httpd image, we have a set of values describing the exposed ports, which can be used by the container runtime for exposing the service included in this container image to the outside. This means, as we saw before, and as we can see again here on the running container, running podman inspect again, that there is a networking environment with an IP address. This IP address can be contacted directly from inside my machine, but if I want to expose this service to the outside, I have to map the port onto my local system, onto my local IP address, for example localhost. So, again, we clean up the environment: we kill and remove the running containers. We can then run the httpd server again, check that it is running, and running correctly, started five seconds ago, and then inspect the running container for the IP address. Here we go, the IP address, and again I can check it's working by calling it with curl. But I can, for example, run it with a special option: podman run, -d for putting it in the background, and -p for mapping the container port 8080 to port 8080 of my system. So we map the same port of the container onto my system too, and then we specify the image; basically we are adding this -p option with the ports. We run it, we can check that it's running, and as you can see there now appears a port mapping. This means that I can run curl on localhost and get the same output: the container's port and IP address are mapped to a port and IP address of my system, my laptop running Fedora 29, letting, for example, friends and colleagues reach it, or, if I enable routing on my home router, even exposing this service to the outside world.
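The port-mapping step, plus checking the server's logs afterwards, can be sketched as ("web" is a hypothetical container name):

```shell
# -p 8080:8080 maps container port 8080 onto port 8080 of the host.
podman run -d --name web -p 8080:8080 registry.fedoraproject.org/f29/httpd

# The PORTS column now shows the mapping.
podman ps

# The service is reachable on the host's own address, not only on
# the container's internal 10.88.0.x address.
curl http://localhost:8080/

# podman logs shows the process's stdout/stderr, including each request.
podman logs web
```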
Then, moving forward to the logging side: we can, of course, troubleshoot our container by going through its logs, and we can leverage a Podman command for showing logs. For the running container we can run podman logs, and as you can see we are displaying all the output that the process itself is pushing through our container engine: the standard messages of the web server starting, where it's opening the port, going through the service initialization, then the command-line options that were used, and finally the various requests that are coming in; as you can see, it just logged the requests that came from the curl commands we ran. This output includes the standard output that the process is printing, and also the standard error, so we can also go through any error that the process may encounter during startup and during runtime itself. I'm seeing that there are some questions in the chat; we will go through them at the end of the presentation, we are almost there. Going through one of the last topics, persistent storage: we said that any modification, any personalization we make to our container is lost if we stop it and run another container from the same container image. But of course we can configure a mount point for our container using persistent storage. We will see in this example that we create a directory on our container host, in my case my laptop running Fedora 29, and we then download an index page: we will try to clone the main page of the Fedora registry, only the main page, not the whole registry, of course. We then set the right permissions, and we instruct Podman to mount that directory inside the container.
Of course we will then verify the successful mount, running curl again, or maybe opening the page in our web browser. Moving back to the console: we first clean up the environment, as you can see there is a running container, so we kill it and then remove it. So we have a clean environment again, with our httpd image already downloaded. We can then inspect our container image again, httpd, and grep for the user: as we saw before, we have the user 1001, and we will use this information for setting the right ownership on the directory we will create later. Then we do a podman run of httpd again, running the container directly attached to the console so we can access the terminal, and we look through the httpd configuration file for the document root, the main directory from which the web server serves your pages. We grep for DocumentRoot, and as you can see it's the standard one, /var/www/html. We can then exit, check that the container is not running anymore, we just terminated it with Ctrl-C, and then we create a new /var/www/html directory; in my case I already created it, so that's okay. Then we go inside this directory; I had already downloaded registry.fedoraproject.org, but in our case we want to start clean, so we delete it. Then we download the web page: we pass some options, for converting links and so on, and we take the web page directly from the registry. It's downloading, and as you can see it recreated the directory that I prepared before. Then we set the right ownership for our container, recursively for all the subdirectories of the one we just created, and finally we can execute the main command. We created the directory that will serve the pages our container will expose, so we just run podman run again: -d, we give it a name, for example my-httpd-service, -p 8080:8080 to expose the port, and then we map a volume: we map the source directory /var/www/html on the host onto /var/www/html in the container. Then we add the option with a colon and an uppercase Z, where we instruct Podman to set the right SELinux labeling for the container; SELinux is the security labeling system of Fedora, CentOS, and Red Hat Enterprise Linux, and otherwise it would not allow the container to read and write inside this directory. Finally, we specify the name of the image. We can run it, and as you can see Podman has executed the container. We can, for example, do a simple curl on localhost:8080 to see that everything is working properly, and then connect to localhost:8080: as you can see, we just replicated the main page, with all the basic stuff, and we are serving it through our container, exposing it on localhost, directly on my machine. Moving forward, we are almost out of time, but we also have an example of containerized system services. This is to show you that running new containers through Podman can also be integrated directly with systemd; systemd is the main system and service manager of Fedora, CentOS 7, and RHEL 7. What you are watching on this slide is just an example of a unit file that you can place in /etc/systemd/system, with the name of your service, to instruct systemd to interact with Podman and start and stop your container. This basically allows you to run something like systemctl start my-httpd-service, if I'm not wrong. As you can see, the service is not active; we just clean up the environment before testing it, so we podman kill and remove the container. So, again, the service is stopped; we can start it and then watch its status, and as you can see the container and the service are running. We can then move back to the web page that we just cloned, and as you can see it's working properly, it keeps working. Of course, you can explore and get more information
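A unit file along the lines of the one shown on the slide might look like this; it is a sketch under the assumption that a container named my-httpd-service was already created as above. Place it in /etc/systemd/system/my-httpd-service.service, then run systemctl daemon-reload and systemctl start my-httpd-service:

```ini
[Unit]
Description=Containerized httpd service via Podman
After=network.target

[Service]
Type=simple
# Attach to (and thereby run) the pre-created container...
ExecStart=/usr/bin/podman start -a my-httpd-service
# ...and stop it cleanly, with a 10-second grace period.
ExecStop=/usr/bin/podman stop -t 10 my-httpd-service
Restart=on-failure

[Install]
WantedBy=multi-user.target
```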
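The persistent-storage steps can be condensed as follows (the host directory /var/www/html and the UID 1001 are the values used in this session; the page content is made up for illustration):

```shell
# Prepare content on the host; the image serves files from /var/www/html.
sudo mkdir -p /var/www/html
echo '<h1>Persistent page</h1>' | sudo tee /var/www/html/index.html

# The container runs as UID 1001, so hand it ownership of the content.
sudo chown -R 1001 /var/www/html

# Mount the host directory into the container; the :Z suffix makes
# Podman apply the right SELinux label so the container can read it.
podman run -d --name my-httpd-service -p 8080:8080 \
  -v /var/www/html:/var/www/html:Z \
  registry.fedoraproject.org/f29/httpd

# The page now comes from the host directory and survives container removal.
curl http://localhost:8080/
```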
by looking at this URL: I will send all this material to Fedora Magazine and the Fedora Docs, so the slides will be accessible at this link. You will find a real example showing how to set up a LAMP stack, so Linux, Apache, MySQL, and PHP, for hosting, for example, WordPress in containers as system services: you will have a system service for httpd and a system service for the MariaDB/MySQL database. Then we have links and documentation, something useful. Before going to the questions, I want to thank the OpenShift team, Scott McCarty, Thomas Cameron, and William Henry, who gave me permission to reuse their material and slides, and of course all the Fedora team. So, thank you!