A lot of people are at a different point in their container journey, and that's why we have this awesome talk ahead of us. So let's welcome Stefan and Valentin, who are going to talk about containers, to pods, to Kubernetes with Podman Desktop. Enjoy. Hello, everyone. Thanks for having us, and thanks, René, for the introduction. As René mentioned, we are going to speak about containers, and about container tools that help you on your journey to Kubernetes. I'm going to share my screen now to show the slides. You should be able to see my screen. Is it working? I think so. Yes, it's working. Perfect. To introduce ourselves: Valentin, maybe you want to introduce yourself first. Thanks, Stefan. Hi, everybody. I'm Valentin. I'm working in the Container Runtimes team, where we sit pretty much between the kernel and Kubernetes, and now also Podman Desktop. We take care of all the low-level container technologies; we implement and maintain tools such as Podman, Skopeo, and Buildah. Over to you, Stefan. Yes, thanks. I'm Stefan Le Meur, a Product Manager working on developer tools. In the developer tools business unit, we are trying to build tools that help developers work with containers and with Kubernetes and OpenShift in a smoother way. As an agenda for today, we will start by introducing Podman a little bit for those who are not familiar with the tool. Then we will discuss how developers can go more easily from containers to Kubernetes with Podman Desktop. We will spend a lot of time in a demo, and we will introduce the future plans that we have for the tool. And with this, I'm handing over to Valentin to speak a little bit about Podman. Thanks, Stefan. I'm already looking forward to your demo; they're always awesome. You're really a master of live demos. I'm very much looking forward to it.
Before introducing Podman, I would like to dive a little bit into the container philosophy that we have at Red Hat. I don't want to speak for the entire company, only for our team. Our approach is really to provide small solutions that allow us to innovate in functionality, security, and stability, all at different paces and speeds. As I mentioned before, there is not only Podman but also Skopeo, which is our Swiss Army knife for distributing, copying, and manipulating container images. Buildah, as the name may suggest, is our Swiss Army knife for building container images. All our tools share the same underlying libraries for managing container storage and container images. There's also CRI-O, a tool developed within Red Hat together with the community, for running containers and container images under Kubernetes. So we really try to have a Swiss-Army-knife approach rather than a one-size-fits-all solution. But arguably, Podman is the biggest of these tools; it does a little bit of everything. Next slide, please. Podman aims at being a drop-in replacement for Docker. It is what we call a container engine. Some call it a container runtime, but the term runtime is really overloaded; there's something else actually called a container runtime. What we define as a container engine is something that manages images, containers, and pods, a pod being a group of one or more containers sharing certain resources. It manages volumes, networks, all the things we need to run and manage the life cycle of containers. Podman is short for pod manager, and one of the initial goals of Podman was to be a drop-in replacement for Docker.
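As a rough sketch of the drop-in idea, the commands below mirror common Docker invocations; the image name, container name, and port mapping are illustrative, not from the talk:

```console
# Podman accepts the same CLI verbs as Docker, so existing muscle
# memory (and many scripts) keep working:
$ podman run -d --name web -p 8080:80 docker.io/library/httpd
$ podman ps
$ podman logs web

# A common convenience on systems without Docker installed:
$ alias docker=podman
$ docker images
```

Because the verbs and flags line up, the alias is often all that scripts need to switch engines.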
Many of the people in our team were initially working on, contributing to, or maintaining Docker, but early on there were a couple of issues that we saw in the architecture, which I'm going to talk about in a minute, that we didn't like and which made, in particular, rootless support much harder than necessary. What I mean by rootless is that we can run containers without being root on the system. This is not only great for multi-tenant systems; it also increases security a lot. And this is something that Podman has supported since version 1.0 and actually before. So we have a clear focus on security. I think Podman is a great tool for developers, which is something that Stefan will focus on, but also for sysadmins. Maybe you don't see it from the outside, but I feel like an old greybeard Linux admin or old-school Linux developer, so we've got you covered there. And if you're on Mac or Windows, then Podman Desktop has you covered as well. One more click, please. Yes, awesome. Thank you, Stefan. Talking about the architecture: I mentioned just a moment ago that early on we saw some issues in the architecture of Docker, which is shown here. On the left-hand side, you see that when you use the Docker client in a terminal, independent of whether you run it on Mac, Windows, or Linux, in almost 100% of all cases you don't need to type sudo. You can just run it as an ordinary user, while the client is talking to what is called the Docker daemon, a server running in a systemd unit on Linux. This runs as root for historic reasons: back when Docker started, many features of the kernel, such as layered file systems but also networking, really required root privileges. So it was just more convenient to put the user of the Docker client in the docker group, which grants access to the Docker socket, to talk to the Docker daemon, which in turn runs as root.
But quite often, when you run Docker, you effectively run all your containers as the root user. And sometimes people say, well, then I don't need to type sudo, so this is fine: the client and the server are in the same group, the docker group, which allows the client to access the Docker socket, which in turn talks to the daemon, which in turn runs as root. The problem with that is that, from a security point of view, it's bad. Even in the rare case that an attacker manages to break out of a container, they have the holy grail: they are running as root, and then it's really hard to contain the damage. Also, it's quite common to mount certain paths, or even the Docker socket itself, into running containers, and then you effectively grant the containers access to the entire host. So it can be very difficult to secure these deployments. Podman, on the other side, does not implement a client-server architecture but a very traditional UNIX fork-exec model, such that all containers you run with Podman are child or grandchild processes of the initial podman command that you, for instance, typed in your terminal. So Podman is not a daemon. Well, if I said it's 100% daemonless, I would be lying a little bit, because we still need a process to monitor the container. It's not really a daemon; it's called conmon, short for container monitor, and it is started before the container and ends after the container. It also exits with the same exit code, which lets Podman integrate very, very smoothly with systemd. Keep in mind that there is one conmon process per container: it's not managing multiple containers, only one, which comes in pretty handy when running under systemd. Next slide, please. As I was just mentioning, Podman integrates very smoothly with systemd, which is one of the cornerstones for running Podman in edge computing.
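To make the fork-exec point concrete, here is an illustrative rootless session; the user, image, and names are hypothetical, and the exact process listing will differ per system:

```console
# No sudo, no daemon -- the container is started by an ordinary user:
$ whoami
alice
$ podman run -d --name sleeper docker.io/library/alpine sleep 1000

# One conmon per container, supervising exactly that container:
$ ps -o pid,comm -C conmon
```

The point is that there is no long-running root service in the middle: stopping the container ends its conmon, and nothing else is left behind.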
When it comes to edge computing, this is something very, very dear to my heart. I'll try to make it brief; I could talk about it for an hour, but I want to leave Stefan enough time for the demo. There are a couple of things to consider when running containers at the edge, which means outside of our traditional server room or cloud environment: it could be a car, it could be an IoT device. These deployments need to be reliable, and we want to be hands-off; we can't just SSH in, and sometimes the network connectivity doesn't even allow it. So we need a lot of automation, and these workloads need to be self-healing. If there is an error somewhere, or a bug, and the workload goes down, this needs to be detected and the workload restarted automatically; the system needs to detect whether the running workload is actually healthy or not, and also update it automatically. If you're interested in that, next slide please: we blog a lot, really a lot, and we did a lot of things in this space. So go ahead and look; we blogged a lot on Enable Sysadmin as well as lots of other Red Hat and open source blogs. But the coolest thing in this space is Quadlet. Quadlet is something that we develop together with the automotive team, so now Podman is running in cars, which is pretty dope. Initially, the workflow when you wanted to run Podman in systemd, generating a unit with podman generate systemd, wasn't necessarily where we wanted it to be. So then Alexander Larsson from the automotive team came up with Quadlet, which you can see as the Kubernetes YAML, or even the Compose file, of running containers in systemd with Podman. Here in the middle, you can see a section that Quadlet adds to the systemd unit syntax, where I can specify an image, volumes, exec options, all kinds of things. This is really, really nice, and Quadlet takes care of generating all the complexity of a systemd unit file to run Podman smoothly and reliably.
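To give a flavor of Quadlet, here is a minimal, hypothetical `.container` file; the file name, image, and port are illustrative, not taken from the slide:

```ini
# ~/.config/containers/systemd/web.container
# (use /etc/containers/systemd/ for system-wide services)
[Unit]
Description=A simple web server managed by Quadlet

[Container]
Image=docker.io/library/httpd:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates the full service unit behind the scenes, and `systemctl --user start web.service` runs the container under systemd with restart handling included.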
If you're interested in Quadlet, there are a couple of blog posts out there, one written by Dan Walsh, whom I think most of you know, and one by Ygal Blum, a brilliant engineer also working on automotive and many, many other things. I think this is pretty much all from my side here, and I hand it over to Stefan. Yeah, thank you, Valentin, for the introduction to Podman. I think this gives good context on all the different capabilities that Podman is looking after. Right now I'm going to transition to Podman Desktop. Podman Desktop is a native application that provides an easy-to-use interface for working with containers, but it not only works with containers; it also enables you to transition more easily from containers to Kubernetes. So I'm going to introduce the tool, and that's very important, because when we think about local developer environments, I think we can agree that they become impractical and they lack consistency with production. In fact, there's a lot of complicated setup when you need to get the environment up and running, and your laptop is naturally not able to run all the things that you will be running in production; you have limited resources. There's also a lack of consistency, because the way you run things in a local environment is unlikely to be the way you run things in a production environment, especially if you are running them on Kubernetes and OpenShift. We know there are many, many different pieces at work that are difficult to reproduce in a developer environment. And there are also a lot of discrepancies between the way containers and composed applications are created and configured to talk to each other in a local developer environment and the way they will be configured in a production environment. To solve this, a lot of developers end up using things like Docker Compose to group applications.
And in fact, it brings the worst of both worlds, because the developers are using a technology that then needs to be translated to run in a production environment. So this is hard, and we know that it is hard because there's also a gap in the skills that developers have on Kubernetes in general, and that creates a disconnect between the developers on one side and the ops on the other side. So there are challenges going from local developer environments to deployment in production. In your local developer environment, you may use base images that come from different sources, with low or no security vetting, from different container registries, and you may end up using Docker Compose as well. On the other side, on Kubernetes or OpenShift, you need to use different kinds of base images, maybe ones that have been created by your upstream; you may use different container registries; and there will be security constraints enforced on the workloads as well. And then the way you configure the application to run on Kubernetes is obviously going to be with Kubernetes YAMLs. So there are discrepancies going from the developer environment to production, but there are also discrepancies when a bug happens in production and you try to reproduce it in your developer environment. As a result, there's an adoption barrier for the technology in this context. With Podman Desktop, we simplify the workflows and the experience of working with containers when you are targeting Kubernetes or OpenShift. In fact, we are bringing Kubernetes and OpenShift closer to the developers, and we are also trying as much as possible to minimize the discrepancies between the desktop developer environment and the target Kubernetes environment.
So we aim to bridge the gap between the developer environment and production, so that when you are working locally, you have an environment that is as close as possible to production. In fact, with Podman Desktop, and you will see this in the demo, you can start with containers, and they can then be translated natively into pods in Podman, because, as you saw in the intro from Valentin, you can run pods with Podman, and Podman also provides compatibility with other Kubernetes objects. Once you have these pods running locally, you can then transition more easily to a Kubernetes pod; you can run it in a local Kubernetes environment or in a remote Kubernetes that you can connect to as well. Podman Desktop 1.0 was released a month ago. It provides capabilities to install and configure the container engine as well as a local Kubernetes. You can install and run it on Windows, Mac, and Linux. It provides capabilities to work with containers. There's also a bunch of capabilities related to enterprise security and running such a tool behind your infrastructure. And it also provides capabilities to bridge between your local environment and remote environments. We will see that in the demo just now. So I'm going to stop presenting the slides and share my entire screen. There we go. This is the dashboard of Podman Desktop. On this dashboard, you can see that Podman is running. I have access to the list of the different images that I have pulled in my environment, and if I take this one, which is an httpd image, I can see its summary and its history, and I can also run it. So I can, for example, just run it here, and it will be shown in the list of containers that are running. I can access the container, I can see the logs, and I can also access the terminal. And in fact, let me show you a little bit more: I have another container which is running; it's a Redis container.
There's a bit more in the logs, as you can see. I can access the terminal and interact directly with what's running in the container, so I can set a key, do a get on it, and boom, I have hello. That's very handy: I don't need to remember the command I would need to run to get a shell directly into the container. So that's already one capability which is pretty nice. I also have the ability to build an image: I can choose my Containerfile or my Dockerfile directly and build it. Here I can give the image a name, for example a repository on quay.io, and I can build my container. We will see the image building; it's probably going to run in the background, but once I have this image, most probably what I want to do is push it to a registry. Inside Podman Desktop, you have the ability to connect to different OCI registries. It could be Quay, Docker Hub, or GitHub, but you can also bring your own OCI registry; it can be an Artifactory, it can be any kind of OCI registry, and it will be compatible here. Once the image is built, and I should be able to see it in a moment just here, I then have the ability to push it, so I can push the image directly to Quay. It's going to take care of... ah, I'm not properly authorized. So I'm going to just reconnect. Apparently I'm not able to connect. Oh, that's because of the option I chose; here you go. I'm connected, and I should be able to push my image now, and if I go to quay.io, I will be able to see my image. So that's handy. Now what I would like to show you is a different area, the settings. As I mentioned earlier, you have the ability to configure your proxy and your VPN, you can connect your different registries, and you also have preferences where you can configure a bunch of different options for the tool: the size of the editor, and other options for the application as well.
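The build-and-push flow shown in the UI maps to roughly these CLI commands; the registry, namespace, and tag are placeholders, not the ones used in the demo:

```console
# Build from a Containerfile/Dockerfile in the current directory,
# tagging the image for a registry:
$ podman build -t quay.io/myuser/myapp:latest .

# Authenticate, then push to the registry:
$ podman login quay.io
$ podman push quay.io/myuser/myapp:latest
```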
There is something a little bit interesting with the tool: you also have the ability to run pods, because Podman provides the ability to run pods. Here I can see that I have already started a pod, but I can also just take a Kubernetes YAML, for example, and say, hey, I want to run this YAML file with Podman. So I have this, and I can see my pod running here, and I will also be able to access the containers and the terminal. So that's pretty handy. What is also interesting is that if I have a container, I have this Kube tab, which gives me the YAML that I can use to run this container in a Kubernetes environment. That's also something you can use: you can take this YAML file and just apply it to a Kubernetes environment if you want. Now I have an application; it's basically an application built with two containers. It's my Python application, and if I click here, I will be able to see it live. It's just a basic application showing how many times I have visited this website, and it's connected to a Redis database that I am using as a cache. But what I want to do is run that in the context of a future deployment onto Kubernetes. So I have the ability to take those two containers and podify them: I can take my two containers and run them as a group of containers under the umbrella of a pod. I can do that, and here I'm asked which ports I want to expose outside of my pod. I don't want to expose my Redis database, so I leave it out. What it does is stop my containers and recreate them as part of a pod. Here you go: I have my pod. If I access it here, I can see the logs of my different containers, and I can see the different containers running inside the pod. And if I click here, I should also be able to access my application. You see the counter is reset to one, because it's a fresh start of the application.
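This Kube integration corresponds to Podman's Kubernetes commands on the CLI; as a sketch (the pod and file names are illustrative):

```console
# Export a running pod (or container) as Kubernetes YAML:
$ podman kube generate mypod > mypod.yaml

# Recreate the pod from that YAML, entirely with Podman:
$ podman kube play mypod.yaml
```

On older Podman releases the same commands are spelled `podman generate kube` and `podman play kube`.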
So that's good: the application is running locally, just with Podman, and it has gone from containers to a pod. Now, there is something else that we provide within Podman Desktop: a concept of extensions, and extensions provide capabilities to support other container engines or Kubernetes distributions. What we did is integrate kind. For those who are not aware, kind is Kubernetes in Docker: it sets up a kind cluster as a container running locally. So here I'm going to create my cluster. I can access the log, and you can see that the control plane is starting; it will take a few seconds, and I will have my cluster running locally, with its storage class. There you go: I have my kind cluster set up. When this is done, the kind cluster is accessible from the list of containers, and here I can see the logs of the cluster. But interestingly, I can also run kubectl inside the container and interact with the cluster if I want. So let's say that now that I have modified my application, I want to test it on Kubernetes. I can take this pod, and what I'm going to do now is kubify it, which means I'm going to generate the manifest and deploy it onto Kubernetes. So I'm generating the Kubernetes manifest, and here I also have some options related to the services, and I can create the ingress for my application. I deploy, and now it deploys my application directly onto my kind cluster. And in fact, if I wanted, I could have taken this image and said, oh, I want to push this image to my kind cluster. When it's done, I will have my application running on kind, and I can see it directly from here; I can see the logs for the application as well, and if I connect to port 1980, I should be able to see my application running here in kind. So that's pretty cool.
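Roughly, what the kind extension automates can be sketched on the CLI like this; the cluster, image, and file names are illustrative:

```console
# Create a local cluster (kind can also use Podman as its provider
# via KIND_EXPERIMENTAL_PROVIDER=podman):
$ kind create cluster --name demo

# Make a locally built image available inside the cluster:
$ kind load docker-image quay.io/myuser/myapp:latest --name demo

# Deploy the generated manifest and check the result:
$ kubectl apply -f mypod.yaml
$ kubectl get pods
```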
So from Podman Desktop, I have been able to go from containers, to pods running locally with Podman, and now to a Kubernetes environment. We also provide an extension that allows you to run OpenShift in your local environment. If you are interested in OpenShift, you can install the extension directly from the list of extensions, and when you configure it, you will have the ability to choose which preset and which kind of OpenShift Local machine you want to run: it could be OpenShift Local with a single-node cluster, or it could be based on MicroShift, which is a little bit lighter and faster to start. So I'm going to just start it. It's going to start from here, and you will see that my Kubernetes context is going to switch from kind to MicroShift. I will have MicroShift starting here, and I will be able to deploy my application directly from Podman to MicroShift as well. I think I'm running a little bit out of time, so I'm going to get back to the slides. So that was the demo of Podman Desktop. You can see that you can go from an application running in containers, to a group of containers running inside a pod, and then to Kubernetes or OpenShift very easily. The tool is free, it's completely open and extensible, and this is by design. In fact, the way we are building it is with different components: on Windows we are using WSL for the virtualization stack, QEMU on Mac, and on Linux, of course, it's native. There are a lot of possibilities to extend Podman Desktop: you can add custom actions, menus, different configurations, default registries, status bar entries, and you can even extend the system tray if you want. It also provides the capability to support other container engines such as Docker or Lima, and other Kubernetes providers like kind, and there is one coming for minikube as well, built by the community.
Those are the different capabilities and how we keep the tool very open for everybody. In terms of future capabilities, we are going to continue our effort to provide efficient developer flows for working with containers. We will simplify the onboarding experience, especially if you are coming from Docker: we will show you how you can configure the Docker compatibility, which allows you to simply use Podman as a drop-in replacement if you want. We will also have support for the native hypervisors, such as Hyper-V on Windows or vfkit on Mac. On the Kubernetes side, we will continue improving the flows for podifying, that is, turning containers into Kubernetes objects and then running those objects in Kubernetes environments. We will extend the ability to see the different Kubernetes objects directly from the UI of Podman Desktop. And we want to simplify the transition from Compose to Kubernetes: a lot of developers are using Compose today, but they also need to run their applications on Kubernetes, so we want to help them along this flow. On the OpenShift support, you have an extension today to work with the Developer Sandbox; you will have the ability to create your account. We will integrate an image checker: when you run a container on OpenShift, there are a lot of prerequisites in terms of security, it's all about safeguards, and we will provide tools to help you make your image compatible with OpenShift. And we will continue our exploration of integrating MicroShift for developers, so that we can provide a tighter integration with OpenShift as well. If you are interested, you can find the links here: on podman.io you will find everything we have been discussing today. There is also a book available on developers.redhat.com, Podman in Action, which you can download from there. I don't know if we have enough time, but a big thank you, and if there are some questions, we are happy to answer them.
Yeah, so thanks a lot, both of you, Stefan and Valentin, a really great talk. I'm obviously a little bit biased, but I really love Podman, just the fact that it has a very nice UI, and I love the integration with Kubernetes and so on. It's really a game changer for me personally. But yeah, let's get to the questions; we don't have a lot of time left. One of the questions was: does Podman also work as a docker-compose drop-in replacement? So, we provide compatibility, which means that if you have docker-compose installed in your environment, it's completely possible to use it to run your Compose files with Podman. And in fact, inside Podman Desktop you have a way to configure that. I have not demoed it, but it's also possible. Cool, great stuff. The other question was: is it possible to define parameters like memory and disk size for a container in Podman Desktop? Yes, you have advanced settings: when you are configuring the container, there are advanced settings that allow you to do that as well. Cool. And the last question which came in was: does it also work if you're using WSL, the Windows Subsystem for Linux; if you're running Podman in there, does it work with Podman Desktop as well? In fact, yes, that's the way we communicate with Podman, and that's the way Podman runs on Windows at the moment: Podman runs inside a WSL machine, and Podman Desktop communicates with the Podman engine running inside WSL. Sounds awesome. All right, that's it for the questions for now. Again, just to emphasize: later on at our main stage, you can ask all the different questions you might still have, get in touch with all the Red Hatters there, and share whatever you have in mind.
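As a sketch of that Compose compatibility: docker-compose can be pointed at Podman's Docker-compatible API socket (the rootless socket path shown here may differ on your system):

```console
# Expose Podman's Docker-compatible API as a user service:
$ systemctl --user start podman.socket

# Point compose tooling at the Podman socket instead of Docker's:
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
$ docker-compose up -d

# Recent Podman versions can also invoke a compose provider directly:
$ podman compose up -d
```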
And even beyond that, you can always reach out to the Red Hatters, get in touch with us, and talk about all the awesome stuff. So thanks to everyone who made it until here, especially to Stefan and Valentin for this talk, and to Maria for the last one. We enjoyed both of them. I'm going to hand over to Noel, who is going to do the moderation from now on, and I am going to look into the chat and watch all the sessions from now on as well. So enjoy, everyone. We have about a five-minute break, and we'll see each other at quarter past. So thanks again, everyone. Thanks for having us. Thank you.