Good morning everyone, it's Wednesday today. Oh, Jafar just joined, so hi Jafar, you're just in time to have a coffee with us. I'm just enjoying my espresso. It's very, very hot here in London, I don't know about you guys, but we're at about 28 degrees, in my apartment even more. So today, for our OpenShift Coffee Break appointment, we have Gianni Salinetti and of course our Jafar as co-host. How are you guys doing, everything okay for today? Yeah, that's okay. I have some noise in the house because I'm moving, I just moved to a new house and I have people working inside, assembling my kitchen right now, so there is a bit of noise, but you know, moving house... Anyway, everything is fine. So how are you? Very good, apart from the heat, that's the plague here in the UK as well. So everything is okay. Guys, I know that you have a session packed with content for us, and today you will be talking about Podman. I just wanted to ask you, I think you also recently came out with a book, what was it? Exactly, exactly, we did a book. It's called Podman for DevOps, and it was released by Packt Publishing more or less a couple of months ago, in May, and it was written by me and my super colleague Alessandro Arrichiello. He should have been here today, but unfortunately he had a small issue so he could not join us, but he's with us with the Force. So we wrote this book with the purpose of providing a comprehensive book for Podman: not just running basic stuff, but also understanding and managing complex scenarios, like for example security features, complex builds, integration with systemd and Kubernetes, and so on, and obviously how to manage a transition from Docker to Podman. The book is going quite well, by the way, it's selling quite well and we're getting quite good feedback from people, so we are very happy. And by the way, I'd like to thank... And the book is called Podman for DevOps, right? Podman for DevOps, I'll send the link later for those that are interested.
I also have one image with all the links in the presentation later. And by the way, I'd like to thank the other colleagues who helped us with the technical review — Nicola Amato, Marco Fagotto, and Pierluigi Rossi — who acted as our reviewers. It's been some kind of Italian job, because most of us are Italian. We had a very nice foreword by Brent Baude. Brent Baude is one of the main developers, one of the engineers working on Podman along with Dan Walsh, and by the way, Dan Walsh gave us a great hand in writing this book. He is also coming out with another book about Podman, so you should check it out when it becomes available. Dan Walsh is the father of Podman, and he's coming out with a book, so we're going to have two books about Podman in the future. Tell us a little bit more about yourself: what do you do, what's your background if you wish, and what do you do for Red Hat? Yeah, currently I am a solution architect for Red Hat, mostly focused on cloud infrastructure, but I also have some DevOps background. In the past I was an instructor — I worked in training for Red Hat for almost five years — so that experience of teaching people has been a very big part of my life and it still is. I really love to share, to try to explain things in a simple way, whether basic or complex things. Which is what we hope you're going to do for us today, taking us into the world of Podman. So without further ado, because I'm conscious that we also have questions to answer and hopefully we'll make it interactive: you prepared a presentation for us today, right? Yeah, exactly, I prepared this presentation along with Alessandro, so let me share it with you. It was nice to call it "A Coffee with Podman". So let me bring it to the stream, and there you are, take it away. I called it "A Coffee with Podman" because it's an OpenShift Coffee Break, so it was a nice fit.
So as you can see, we have both names, Alessandro Arrichiello and me. We prepared this presentation together. We wanted to pick the most important things from the book and not repeat ourselves on basic topics, because I think that nowadays most of our listeners, our audience, are already aware of what containers are and how to start a basic container, so today we wanted to focus more on details. So: container history, how we got to Podman — from the very first containers, to Docker, to Podman; an architecture deep dive of Podman, so how it works and how I should use it; rootless containers, so how we can run containers in a more secure, rootless way with Podman; and its companion tools — Buildah, a dedicated tool to build images, and Skopeo, a dedicated tool to manage images and registries. And finally, we want to introduce some more specific integrations. For example, how to integrate Podman with systemd: how can we start our containers as services on a node? And how do we relate to previous technologies like Docker Compose? Many people ask us: okay, Podman is great, it's very nice, but I have my Compose files — can I use them? We have the answers here. And finally, how to integrate with Kubernetes. Podman has native integration support for Kubernetes: it can generate Kubernetes manifests, like pods, and play those manifests. So we can give Podman a standard Kubernetes pod manifest and Podman runs it for us — and this is really a game changer for me; we'll discuss it later. And finally, we will give some hints about interactive labs for people who want to try it out, and some further reading — by the way, our book. So let's go, let's start with the container evolution. Everything started at the dawn of time: there was chroot.
So chroot in Unix was the first way to isolate a process, especially in terms of file system access. Then the concept of chroot evolved over time, and other operating systems like FreeBSD or Solaris provided other ways to isolate processes, like FreeBSD jails and Solaris Zones. The concept was quite similar: I want to isolate a process inside some kind of sandbox, have it run only in there, and not allow the process to escape its sandbox. This approach obviously reached Linux too, and it became available in Linux when the concept of namespaces was introduced in the Linux kernel. Kernel namespaces are a feature that allows us to isolate a specific subsystem of the kernel and create clones of it — so, run processes in clones of a specific subsystem. For example, we can create a network namespace and have a process run in a separate network stack. This is a great feature that allows us to trick a process into believing that it's running on a native host, while it's actually running only in an isolated portion of the operating system. The first really usable implementation of containers leveraging namespaces — and not only namespaces, also cgroups, for example — was LXC. LXC is a container runtime which was implemented around 2008, and it was a very important step forward in container adoption. The next step was the usage of an orchestrated version of LXC: Red Hat published OpenShift 2 around 2011, and OpenShift 2 was built on the basics of LXC containers. Then Docker came out. When Docker came out, it was a real game changer, because Docker provided a way to use containers very simply, with an approach that was very familiar to developers. So what happened? Developers started to talk about containers, started using containers for their projects, and this adoption by developers was really the key to the evolution. This drove other orchestration projects, like, for example, Kubernetes. Kubernetes is still our main thing.
We are talking about Kubernetes every day. Kubernetes was born as an orchestration tool for containers running on top of Docker. Then it became something more and provided more and more features, more and more APIs, but it started as an orchestration tool. And then, obviously, other projects like OpenShift 3 were rebased on Kubernetes. OpenShift 3 was a complete rewrite, completely different from OpenShift 2: for example, OpenShift 2 was written in Ruby, while OpenShift 3 was written in Go and based on Kubernetes technology. So we know the story of OpenShift — it's important right now — and it continues to work on top of Kubernetes. Other things came later, because Red Hat acquired CoreOS, a company which gave a great sprint of evolution to the container ecosystem. CoreOS developed many solutions, like rkt (Rocket), a container runtime which was a very interesting alternative to Docker. They developed Quay, and they developed the CoreOS operating system. When Red Hat acquired CoreOS, it adopted those technologies, and they became key to Red Hat's evolution in the hybrid cloud and container world. One of the things that came out after this evolution was Podman. Podman, as we will see today, is a daemonless container engine: it has a lot in common with Docker, but it evolves away from Docker's paradigm of a daemon-based container engine, becoming a daemonless engine based basically on a fork-exec approach. It's very simple to use, it allows running containers in a rootless context, and it's very easy to implement and adopt in complex scenarios. Along with Podman, we saw other tools coming out, like, for example, Buildah and CRI-O, and Skopeo — we will see them later. So that's the story of containers. One last thing: during this evolution, we saw a standardization of container technology.
So Docker gave some of its code to the OCI, the Open Container Initiative — a foundation founded also by Docker, by the way. It is contributed to by many vendors, including Red Hat, and it provides standard specifications for containers. So right now we are in a world that is much more standardized than in the beginning, and we have different specifications from the OCI for runtimes, images, and registries. Podman, obviously, implements those specifications, so we have a fully standardized container engine. Any questions? There didn't seem to be any questions. Let's fast forward to today and see what we have managed to do up to now, so that we can see what the state of the art is. Yeah, I would say. So, Gianni, thanks — it's always good to have those milestones recalled. But just be aware that we have 40 minutes left, and it's going to pass very quickly. What I would say is: I know one of the driving thoughts behind Podman was separation of concerns compared to Docker, for example. Can you elaborate a bit on that — explain how they dissected all the tooling we needed into specialized tools — and then go a bit deeper into the details of which tool does what, et cetera? Thank you, Jafar. We're going to take a look at the main differences right now. So we will focus on these three main topics — Podman, Buildah, and Skopeo — with some examples of them, and obviously the main focus will be on Podman. The transition from Docker to Podman was designed first of all to give people using Docker complete CLI compatibility, so people running Docker can start using Podman easily — full CLI compliance, as I said before. The main differences between the two container engines (they are engines, not runtimes) are under the hood, and we will see what the main differentiator between them is. So first of all, let's start with Docker.
Docker is daemon-based. With Docker, we have a daemon running in the background, a service started by systemd, that has the role of supervising the execution of all the containers — and not only the containers: it also orchestrates all the other resources, like networks or volumes, for example. It also provides APIs: the Docker daemon exposes REST APIs, and the Docker CLI is just a client which interacts with those REST APIs. We can expose those APIs over a UNIX socket, which is the default, or over a network connection. So we have a client-server architecture — let's see it in this next slide — which is basically very simple, but decoupled. In Podman, we have a completely different approach. We have a single object, a single container engine, which is not a daemon: it's a single Go binary that does all the job. We can run Podman without any need for daemons in the background, because Podman uses a fork-exec model: it forks all the processes needed to run the container — we'll see conmon and the container runtime in a couple of slides. It is also implemented using common libraries which are shared by other tools like, for example, CRI-O or Buildah. That's very important, because we get reuse of code. Also, Podman introduced the feature of rootless containers before Docker. Later, Docker caught up and also introduced rootless containers, but in the beginning, when Podman came out, it was a very interesting new feature: the ability for standard users to run containers without root privileges. How? By mapping their user IDs to user IDs inside the container which appear to have higher privileges. This is achieved by using namespaces — user namespaces. And finally, Podman supports, as I said before, the OCI standard runtime, image, and storage specifications — just like Docker; Docker also supports the standards. So this is a high-level overview.
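The user-namespace mapping just described can be seen directly on any machine with rootless Podman configured. This is a sketch, assuming subordinate ID ranges are set up in /etc/subuid; the image name is just an example:

```shell
# Run a rootless container: inside it the process believes it is root,
# but that UID 0 is only mapped onto our unprivileged host user.
podman run --rm docker.io/library/alpine id

# Show the mapping table of Podman's user namespace: the first line maps
# container UID 0 to our own host UID, and the following lines map the
# remaining container UIDs onto the subordinate range from /etc/subuid.
podman unshare cat /proc/self/uid_map
```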
But I'd like to share with you some more details of how Podman runs its containers. The basic usage is practically identical between running a container on Docker and on Podman. As you can see, the only thing that changes in this example is the name of the command, docker or podman: we have docker run or podman run. All the arguments and parameters are identical. This was done on purpose, because we wanted people adopting Podman from Docker to feel at home — to feel it's a drop-in replacement. Many people create aliases, like a shell alias from docker to podman, or use the podman-docker package to get a more complete alias and keep running the docker command while actually running Podman under the hood — maybe because they created their own pipelines and don't want to change everything in them, so they just keep using the docker command, but indeed they are using Podman. Other people change their pipelines or scripts and start using the podman command directly. But as you see, it's identical. What changes is how Podman behaves under the hood, so let's do a small architectural deep dive. As we said, we implement all the standards — not only the OCI standards, but also network standards like CNI, the Container Network Interface, which is the standard Kubernetes interface for networking. Nowadays we have also introduced a new network stack written in Rust, called Netavark. So we are very aligned with the community standards. And Podman, as we said, is a container engine, which means it has a container runtime under the hood that is responsible for actually running the container. The same thing happens in Docker: Docker is a container engine and uses a container runtime under the hood. The default container runtime for both Podman and Docker is runc ("run container"). runc, by the way, was a project started by Docker and then donated to the community, to the OCI community.
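The aliasing mentioned above can be as small as one line. A sketch — the `podman-docker` package applies to Fedora/RHEL-family systems:

```shell
# Option 1: a shell alias (put it in ~/.bashrc to make it permanent).
alias docker=podman
docker run --rm docker.io/library/alpine echo "running on podman"

# Option 2: the podman-docker package installs a docker wrapper that
# invokes podman, so existing scripts keep working unchanged.
sudo dnf install -y podman-docker
```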
As the technology evolved and the container ecosystem grew, other container runtimes appeared. So we now have, for example, the crun container runtime, a lightweight, faster runtime written in C, not in Go, and other very interesting runtimes like youki, written in Rust. Podman allows us — we will see it later — to swap container runtimes, to change the runtime under the hood. What happens when we start a container? As we said, Podman uses a fork-exec approach. When we run the podman command, we are invoking the container engine. The container engine prepares the environment for us — it manages networks, volumes, and so on — and then starts a process called conmon. conmon stands for "container monitor": it is a process which runs detached from the main podman process and has the role of monitoring the status of the container. So when we run the podman command, the podman command exits with a zero or non-zero code, while the conmon process keeps running and keeps providing an interface to interact with the container. If we run a new podman command against that container, we interact with the running conmon process. The conmon process, still following the fork-exec model, runs the container runtime, which is runc or crun by default. The container runtime is responsible for executing the container at a very low level: it manages the execution of the isolated process. It creates the kernel namespaces, and it creates the process isolation in terms of resources — CPU and memory — using cgroups. cgroups, control groups, are a kernel feature and a very nice way to limit resources for processes: we can grant a specific amount of memory or CPU to a process, and the container runtime does this job for us in a transparent way.
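The fork-exec chain described here can be observed on a live system. A rough, environment-dependent sketch — process listings will differ between Podman versions and distributions:

```shell
# Start a detached container; the podman command returns immediately.
podman run -d --name sleeper docker.io/library/alpine sleep 600

# No daemon is involved: the containerized process is supervised by a
# conmon process, which keeps running after podman itself has exited.
ps -o pid,ppid,comm -C conmon

# The isolated workload itself:
ps -ef | grep '[s]leep 600'

# Clean up.
podman rm -f sleeper
```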
And once we have the container up and running, the container runtime exits, and the container process is still monitored by the conmon process. As we said, the runtimes can change — there are many runtimes that can be adopted. Podman supports OCI-compliant runtimes: we have runc, the runtime also used by Docker, and crun, as we said before, and other runtimes like youki or, for example, Kata Containers. Kata Containers is a very interesting runtime for executing containers in a virtualized sandbox. You can try all these other runtimes by just swapping the runtime option in the podman command: if you want to change your runtime, you can pass the --runtime option, specify the path to the runtime binary, and swap your runtime. And if you don't want to do that every time you run a container, you can specify the default runtime in the Podman configuration file. Any questions? Yeah, Gianni, just one quick question: are there any known limitations of crun compared to runc, or are they at par in terms of features today? No, they are quite similar, there are no known limitations. By the way, crun was introduced to overcome a runc limitation in the past: runc did not support cgroups v2. cgroups v2 introduced a radical change in the hierarchy tree of the cgroups, of the resources, and this change was not supported by runc in the beginning, so crun was written from scratch to support cgroups v2. And crun is even faster than runc in benchmarks — it's really faster and more lightweight. So is it safe for use in production, or is it still experimental? It is safe for use in production, and by the way, with RHEL 9 it's still not the default, but it is supported in RHEL 9 as a runtime for Podman. It was a tech preview, if I remember well, in RHEL 8 and RHEL 9, and still not the default — but it's safe for production use, absolutely. We actually have a question from the chat.
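Swapping the runtime as described might look like this; the binary path is illustrative — check where your distribution installs crun:

```shell
# One-off: run a single container with crun instead of the default runtime.
podman run --rm --runtime /usr/bin/crun docker.io/library/alpine echo ok

# Check which OCI runtime Podman is currently using by default.
podman info --format '{{.Host.OCIRuntime.Name}}'

# To change the default permanently, set it in containers.conf
# (for rootless users: ~/.config/containers/containers.conf):
#
#   [engine]
#   runtime = "crun"
```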
RGM is asking: what would be the ideal default container runtime if we use Podman for production? And I suppose the answer would be different depending on whether we're talking about Podman standalone or about the container runtime in OpenShift. Yeah, it's different, because on OpenShift we are not using Podman but CRI-O. CRI-O is another container engine, developed especially for Kubernetes, and it uses the runc container runtime — it still uses runc. And I think as you get further into the presentation we'll see the benefits of using Podman during development, in conjunction with all the development tools and testing and all that, so I guess it will help when we get to that point to understand the differences. Yeah, absolutely. By the way, when we have to orchestrate many containers — a complex scenario of containers — we usually need an orchestration tool, and that's why we go for Kubernetes or OpenShift: when we need orchestration plus many other features for developers, a platform for developers. Podman is great when we want to run containers on a single host, or when we want to integrate with our development workflow. Podman is great in a DevOps context, for developers, and for integration with pipelines, for example. In that case, I would go for crun with no problem, but if teams want to adopt runc because they don't need the features of crun, they can go for runc. Also, by the way, runc now supports cgroups v2 — they introduced support — so the two are completely swappable; you can change between one and the other easily. Speaking of other interesting and important features of Podman: Podman introduced the concept of the pod. That's why we have the name Podman — Podman stands for "pod manager". It's a container engine, but it does more than running simple containers: we can run containers inside pods. What is a pod?
The concept of the pod was introduced with Kubernetes, and if you look at the official Kubernetes documentation, they say that a pod is the minimal execution unit inside Kubernetes. What does that mean? It means that a pod is a sandbox that can run one or more containers inside the very same host. So we cannot have a pod split across two hosts, running one container on one host and another container on another host: the pod is a minimal execution unit inside one single host. And what does it mean, under the hood, to run a pod? It means that the containers inside the pod share namespaces among themselves. Which namespaces? They share the network namespace, they share the inter-process communication (IPC) namespace, and optionally they can share the process namespace — which means that containers inside the same pod can communicate faster and more easily, sharing the same resources, the same network resources. We don't always need to share network namespaces across containers; when we do not need that, we do not need a pod. But a pod can be a very useful tool when we want to keep our containers running together on top of the same host — for example, to provide low-latency communication between containers inside the same pod. For example, let's see this example here: we created a WordPress pod, and we put a MySQL and a WordPress container together inside the same pod, running inside the same object. So we can migrate this pod to different nodes and be sure that the two containers will always stay together. We use this approach in OpenShift to put containers that are expected to communicate closely with each other into the same pod — for example, reverse proxies in front of a container. That's what happens when we run service meshes inside a Kubernetes cluster: in that case we have a reverse proxy in front of every container.
And this reverse proxy — it's called Envoy; Envoy, for example, for Istio — runs inside the same pod, and it shares the network namespace with the container. It means the two containers will have faster communication and will share the same network. Gianni, one quick question about this: are you going to talk about the networking and how pods can communicate, et cetera, afterwards? Like, explaining the differences between containers that are within the same pod — how do they communicate? And if I split those two containers into two different pods and I want them to communicate, what are the different Podman networking options, how can we set them up, how can they communicate, et cetera? You know, just to lay out some of the basics. Yeah, thank you, very good question. Let's start from the end. When we have two containers — two simple containers or two pods — communicating with each other, we know that each container has a separate network namespace. They are like separate hosts, and they need to communicate with each other, so we have to provide some kind of software-defined network to connect them: we have to provide layer 2 and layer 3 communication between them. The simplest way to do that is to create a Linux bridge and attach those containers to the bridge — and that's exactly what Docker has done since the beginning, and what Podman does by default. Using a Linux bridge, we attach those containers so they can communicate at layer 2 and are configured on the same layer 3 IP network, and they're routed on the same network. So we have the routing, we have the layer 2 communication, and those containers can talk to each other on the same layer 3 IP network. That's very simple. When we have two containers inside the same pod, those two containers have the very same IP address — they are on the same network stack.
It's just like having two processes running on the same machine and communicating within that machine, maybe over the loopback address, 127.0.0.1. That's the main difference. As I said before, you do not always need containers sharing the same network stack: it's only needed when you want to bypass the software-defined network and just have direct, tight communication between processes. By the way, bridge networking was only the beginning: with Kubernetes, many software-defined network solutions were introduced. Kubernetes also standardized how networking should be done, with CNI, the Container Network Interface. So what happened? All network implementations for Kubernetes nowadays should be compliant with that interface, should implement that interface. We have very basic solutions, like, for example, Flannel on Kubernetes, up to the most complex and feature-rich solutions, like, for example, OVN-Kubernetes, which is, by the way, used in OpenShift. OpenShift uses two possible approaches. One is OpenShift SDN, which is based on Open vSwitch — a virtual switch, an evolved virtual switch created to overcome the limitations of Linux bridges and to decouple the control plane from the data plane, which cannot be done with simple Linux bridges. The evolution of Open vSwitch was OVN, Open Virtual Network: a complete software-defined network which provides full control and decoupling at layer 2 and layer 3, not just at layer 2. OVN was also implemented for Kubernetes, and OpenShift adopted OVN as the new default, the new standard for software-defined networking, since it provides a lot of features that allow us to manage complex scenarios and to easily manage isolation between containers and namespaces in Kubernetes and, obviously, OpenShift. So...
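To make the two cases concrete — containers sharing a pod versus standalone containers on the default bridge — here is a hedged sketch using throwaway names and public images:

```shell
# Case 1: two containers in one pod share a single network namespace,
# so they reach each other over the loopback address.
podman pod create --name demo-pod -p 8080:80
podman run -d --pod demo-pod --name web docker.io/library/nginx
podman run --rm --pod demo-pod docker.io/library/alpine \
    wget -qO- http://127.0.0.1:80

# Case 2: a standalone container gets its own network namespace and
# talks over the default bridge network, like a small host on a subnet
# (as root, or rootless with a Netavark/CNI bridge network configured).
podman run -d --name web2 docker.io/library/nginx

# Clean up.
podman pod rm -f demo-pod
podman rm -f web2
```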
Yeah, so Gianni, we have only 15 minutes left — I don't know how far along we are in the presentation. Yeah, I'd like to move a bit faster, if you agree. Yeah, maybe there were a couple of things that we touched upon, like the benefits of running rootless containers from a security standpoint — maybe jump to that and move a little bit faster. I mean, obviously, like I said, we had too much material for today, so we need to move a little bit faster, yes. Yeah, absolutely. So I'd like to introduce now the concept of rootless containers — let's go straight to it. What are the advantages of rootless containers? We run containers without root privileges: we don't need to be root, we don't need to elevate our privileges to run those containers, and we get better isolation of processes. A process inside a container which escapes the confinement of the container in some way — maybe because of a security issue, maybe a zero-day — could otherwise escalate its privileges to become root on the machine; with rootless containers, we restrict the attack surface on the machine. We also allow many users on the same host to run containers without root privileges. And we get a more secure way to manage our builds — this is very important. We can run build pipelines as rootless containers, and we can also run our builds — for example, the Buildah process or podman build — inside a container. So, nested containers in a rootless scenario: a very secure way to manage our builds without the risk of any kind of escalation. Speaking of builds, I'd also like to introduce the concept of Buildah. Buildah is a native tool for managing builds, and by the way, Podman manages builds by reusing the Buildah libraries, the Buildah code. Buildah also lets us do more complex builds: we can use it the basic way, running Buildah to build from Dockerfiles as we know them, or we can run more complex builds using Buildah's native commands.
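The rootless-build scenario mentioned above can be sketched like this, run entirely as an unprivileged user; the Containerfile content is invented purely for illustration:

```shell
# No sudo anywhere: the whole build runs with normal user privileges.
cat > Containerfile <<'EOF'
FROM docker.io/library/alpine
RUN echo "built without root" > /etc/motd
EOF

# Build and run the image rootless.
podman build -t localhost/rootless-demo .
podman run --rm localhost/rootless-demo cat /etc/motd
```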
If you agree, I have a very quick example to show about this. This is, for example, an example using Buildah native commands. We can see that we have Buildah-specific commands that reflect the Dockerfile instructions: we can create a working container starting from an image, run our custom commands, and then finally commit our container and create a final image. This is great because it allows us to script and automate and put it inside a pipeline, so we have more control than with a basic Dockerfile. We know that a Dockerfile is a sequence of instructions: you cannot stop, pause that sequence, do something else, and then come back — and this is a huge limitation. With Buildah, you can do the very same things as a Dockerfile, but you can also stop somewhere, do something else, and then come back. For example, you can run commands inside Buildah, then configure and commit your image at the end — and this is a great game changer. You can also do builds from scratch: you do not need to start from a base image. You can start from a scratch image — from zero, with an empty file system — and create your build completely from scratch: create your file system and build your custom image. This allows us to create minimal images, even distroless images, which are great for microservices. We do not always need the huge Fedora or Debian image, or even the smaller UBI or UBI micro image; sometimes we really need a very essential, minimal image. And you can do that this way. The companion tool is Skopeo. Skopeo allows us to move our images around, to analyze our images, to push and pull our images from registries, and to manage our registries remotely. Skopeo, for example, can be used to inspect a remote image without pulling it.
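The Buildah native-command flow walked through above might look like this in practice; a sketch, with image and tool names chosen for illustration:

```shell
# 'buildah from' creates a working container and prints its name.
ctr=$(buildah from docker.io/library/alpine)

# Run commands inside it -- and, unlike a Dockerfile, we are free to
# pause here, run any host-side logic we like, and resume later.
buildah run "$ctr" -- apk add --no-cache curl

# Adjust image metadata, then commit the working container to an image.
buildah config --entrypoint '["/usr/bin/curl"]' "$ctr"
buildah commit "$ctr" localhost/curl-tool
buildah rm "$ctr"

# A build "from scratch" starts from an empty filesystem instead:
#   ctr=$(buildah from scratch)
```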
With Podman, we would pull the image locally and then inspect its content, but with Skopeo we do not need to pull it: we can inspect the image remotely, see all the repository details and all its tags, and then pull only what we need, if it matches our requirements. We can also use Skopeo to copy an image from one registry to another — a remote copy between registries, without pulling the image locally and pushing it again — which is another great feature. Or we can use Skopeo to remotely delete images. And finally, I want to talk about the integration with systemd, Compose, and Kubernetes. We can run our containers with Podman and let them be managed by systemd: systemd can manage the execution of our Podman containers since, as we said before, we have a fork-exec approach. So we can create containers or pods, generate the systemd configuration files, save that configuration inside a service — a unit file, a systemd unit file — and then it's executed at system boot by systemd. In that case, the execution is completely transparent for the user, who doesn't know whether the service is a native process or a container. An example that uses this approach with systemd and Podman is OpenStack: Red Hat OpenStack Platform uses exactly this approach. Every service in Red Hat OpenStack Platform is managed this way: we have Podman running those containers, and all the containers are orchestrated with systemd, so we have systemd units for every OpenStack service. And it works very well, it works great, and it even provided a better way to manage OpenStack updates which, as you probably know, are quite complex. So what happens to people who need to adopt Podman but still want to use their old Compose files to run a complete stack? They can still use them.
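The systemd integration described above is usually done by letting Podman generate the unit file itself. A hedged sketch for a rootless user service — container and unit names are invented:

```shell
# Create the container systemd should manage.
podman create --name web -p 8080:80 docker.io/library/nginx

# Generate a unit file; --new makes the service create a fresh
# container on each start instead of reusing this one.
podman generate systemd --new --files --name web
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/

# Enable and start it like any other service.
systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# Optionally keep user services running without an active login session.
loginctl enable-linger "$USER"
```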
They can use the old docker-compose command: they can install the docker-compose command on Fedora and let it talk with the Podman REST APIs. Podman doesn't expose REST APIs by default, but if we have a client which needs to interact with REST APIs, we can start a REST API service using a systemd socket unit. So we can start the Podman socket unit and expose the REST APIs. And the Podman REST APIs are completely compatible with the Docker REST APIs, so we can run docker-compose commands on top of Podman. If we do not want to use docker-compose anymore, we have another alternative, which is called podman-compose. It's a very similar tool: they are both written in Python, and they are very similar in their behavior. When using podman-compose, we do not need to use the Podman socket, because podman-compose interacts with Podman directly. One very important note: docker-compose and podman-compose are not supported on RHEL; they are supported on Fedora. Any questions? And what is our recommendation? Well, your recommendation on what they should be moving to instead of being stuck with docker-compose. That's a very good point. If you want to move on and use something which is more standardized and supported on RHEL, you can use Kubernetes manifests. You can use pods. So we can generate pod manifests using podman generate kube: run our containers or pods, generate our manifests, and then just play those manifests with the podman play kube command. In our book, we have covered many examples of this, because we really believe that this is the new way to manage complex stacks. So we covered basic commands using one single pod with many containers, and more complex scenarios using many pods interacting together on a dedicated network, which becomes a Kubernetes service, and also with dedicated volumes. So we can create more complex stacks: this command manages not only pods, but also services and volumes.
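Wiring docker-compose to Podman's Docker-compatible REST API takes only a couple of steps. A minimal sketch, assuming a rootless setup where the socket lives under `$XDG_RUNTIME_DIR`:

```shell
# Start the Podman REST API through its systemd socket unit (rootless)
systemctl --user enable --now podman.socket

# Point docker-compose at the Podman socket; since the Podman REST APIs
# are Docker-compatible, docker-compose works unchanged
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d
```

With podman-compose this indirection is unnecessary, since it drives Podman directly without the socket.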
So you can create, for example, speaking of WordPress, a complex stack which has dedicated containers for WordPress and for MySQL, with dedicated volumes, talking on a dedicated service network, on a dedicated subnet provided by Podman. And this is transparent, very easy to do, and you can version all your configuration YAML files and then quickly start a complex stack with podman play kube. This is our way to standardize, in my opinion. So people who want to keep using the Compose files are free to do that, but if you need something that is supported on RHEL and more aligned with the Kubernetes standard, which is some kind of official standard nowadays, it's better to stick with this approach. And so, does it require Kubernetes on the machine, or is it just the manifest? Absolutely not, absolutely not. It's just a manifest to create the containers, the networking, et cetera. Because Podman itself can run pods based on those definitions. Exactly. If you have a couple of minutes, I'd like to show you a very quick example of this. Of course. All right, so I'll share this window here. Okay, there we go. So, by the way, I'm running on top of a Fedora CoreOS virtual machine; I'd say, why not use CoreOS? Fedora CoreOS is a Fedora distribution optimized for container workloads, which is, by the way, the distribution under the community version of OpenShift, OKD. OKD uses Fedora CoreOS as the base operating system. We can use Fedora CoreOS on standalone servers to run our workloads in containers. That's what I usually do, and I love its immutable approach; I love immutable operating systems. Here, I already have Podman installed and everything else, and I just want to run this example. So I'm going to use copy-paste to be faster. I'm going to create a couple of volumes. They already exist because I have already tested this example. And I'm going to create an empty pod called wordpress-pod.
This is going to be a pod with two containers. Oh, sorry, it is podman pod create wordpress-pod. Let's do it from scratch. If I do podman pod ps, I see this empty object, which has nothing inside right now. Okay, so now I create my containers inside my pod. So I create a container and I specify that it must belong to wordpress-pod with the --pod option. This is the MySQL container. And now I create the WordPress container in the very same way. Now I can start my pod with the podman pod start command. So I can see now that my pod is running, and I can see my two containers running. And along with the two containers, I see this object, which is the infra container, some kind of init container for the pod, which creates the sandbox and shares the network namespaces and IPC namespaces with the other two. So I have my pod up and running, and now I just want to generate a YAML file, a Kubernetes pod YAML. I can do it with this very simple command, podman generate kube: I give the name of the pod and I specify the name of the output file. Sorry, if it already exists, I want to create it from scratch. And here we go. So now we have this YAML file, which is a standard Kubernetes pod. It automatically wrote all these annotations, it wrote down all the variables I specified on the command line for my pod environment, and it has two containers: one container for WordPress and one container for MySQL. So I have everything I need. So now I can stop the previous pod and also remove it. Sorry, there is a typo. Podman, sorry, it's not a container. Sorry, there was a typo here. And podman pod stop, yes. And podman pod rm wordpress-pod. So now I have completely removed the pod; there's nothing running. And I can run podman play kube wordpress-pod.yaml. With this simple command, I recreate the whole stack immediately. Now it's trying to pull the image again.
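Reconstructed as one script, the demo steps run roughly like this. This is a sketch, not the exact commands typed on screen: the volume names, published port, image tags, and database credentials are all assumptions:

```shell
# Volumes for the database data and the WordPress content
podman volume create dbvolume
podman volume create wpvolume

# Create an empty pod; ports are published at the pod level
podman pod create --name wordpress-pod -p 8080:80

# Add the MySQL container to the pod with --pod
podman create --pod wordpress-pod --name db \
  -v dbvolume:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=myrootpass \
  -e MYSQL_DATABASE=wordpress \
  docker.io/library/mysql:latest

# Add the WordPress container the very same way; containers in a pod
# share the network namespace, so MySQL is reachable on 127.0.0.1
podman create --pod wordpress-pod --name wp \
  -v wpvolume:/var/www/html \
  -e WORDPRESS_DB_HOST=127.0.0.1 \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=myrootpass \
  docker.io/library/wordpress:latest

# Start the pod, export it as a Kubernetes manifest, then replay it
podman pod start wordpress-pod
podman generate kube wordpress-pod > wordpress-pod.yaml
podman pod stop wordpress-pod && podman pod rm wordpress-pod
podman play kube wordpress-pod.yaml
```

The generated `wordpress-pod.yaml` is a standard Kubernetes Pod manifest, so the same file can be versioned and later applied to a real cluster.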
Probably it found a more recent latest MySQL image. Obviously, this happens only the first time. And here we go: it created my pod and created the two containers. So I have the same result here. podman pod ps: I have my pod running. podman ps: I have my containers running. This is a basic scenario with just two containers in one pod, but we can create more complex and realistic scenarios, like two pods connected to a dedicated network with a service. So it's great, it's fast, and it's very easy to use. And I personally prefer it to the standard docker-compose approach. This is something personal, but I usually manage all my complex stacks in this way. So, back to my presentation for the final wrap-up. I'd like to suggest a couple of things. People who want to test Podman can do it in many ways. You can just install the Podman binary; it's provided in almost every distribution. It's also available as a client with podman machine on macOS, so you can run Podman as a client and interact with a dedicated machine inside macOS. It's available in the Windows Subsystem for Linux. And if you don't want to do it on your machine, if you don't have a machine to test it on or you cannot install it for some reason, you can use our interactive labs. Our interactive labs are available at developers.redhat.com, and we have some labs to learn how to manage containers, to get started, and to manage container builds and registries; they are all based on Podman. And you can do everything interactively without touching your laptop or your machines, so they are great for experimenting: you are safe to do anything you want there. And we also have other resources. You can start with the official Podman documentation. For people who want to understand the code and see what happens, we have the GitHub repository, which also provides extra documentation, and the Podman blog, which is a very useful resource for updates and interesting posts.
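For the macOS route mentioned here, getting a test environment with podman machine takes only a couple of commands; the Fedora image in the last line is just an assumption for a quick smoke test:

```shell
# Create and start the Linux VM that backs the Podman client on macOS
podman machine init
podman machine start

# From here, the client talks to the VM transparently
podman run --rm registry.fedoraproject.org/fedora:latest echo "hello from podman"
```

The same client/machine split is what the Windows Subsystem for Linux setup relies on as well.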
Different engineers from the Podman team, Brent Baude for example, write there, and we have the communities. For example, we have the Discord community for Podman. So people who want to try Podman for the first time and want to learn can join the Podman Discord community, ask questions, and believe me, they quickly answer anything you ask. Me too: when I wrote the book, a couple of times I had some issues with the complex things I was experimenting with, and every time I went to the community, to the Discord chat. It's very active, it's very responsive, so be sure to join. And finally, if you want, you can grab our book, Podman for DevOps, and this is the cover. We chose some dolphins running, a pod of dolphins. Podman for DevOps is available on Amazon from Packt Publishing, both as an ebook and as a printed version. And you can also check out our GitHub repository for the book, at the link you can see here, github.com/PacktPublishing/Podman-for-DevOps, which provides the examples from the book. We put the examples on GitHub for people who want to try them out; they are obviously available for anyone. So be sure to check it out and also provide feedback. So thank you so much for your time. Thank you, Gianni, for the very informative presentation. And by the way, we also talk about Podman when we talk about edge capabilities together with the HackFest team, so this comes back in different shapes and forms during our presentations. I'm afraid that's all we have time for today. Jafar, any final comments or any questions? No, I think it was great to have those insights. So thank you, Gianni. I will have a look at the GitHub examples as well, and I wish I could have the book for free. Thank you so much. By the way, Andrea... Yes, yes, no worries, we'll take care of that.
Just wanted to wrap up and remind everybody that the next appointment with OpenShift Coffee Break is next Wednesday, the 20th of July, with our Database as a Service series. We'll be talking about CockroachDB with Red Hat OpenShift Database Access. And thank you very much, everybody, and goodbye. Thank you so much for your time, and thank you. Goodbye. Bye-bye, have a good day. Have a good day.