Okay, so to introduce myself, if you don't know who I am: my name is Dan Walsh, and I'm the chief architect of containers at Red Hat, covering pretty much everything that happens underneath Kubernetes. From Kubernetes down; I don't do anything up at the Kubernetes level. One of the main tools that we've been working on over the last few years is a tool called Podman. If you don't know what that is, Podman is basically a tool for running containers locally. And the Podman project is incredibly popular. To give you an idea of the statistics, Podman right now has about 7,500 stars and there are over 800 forks of it. It's a fairly active project, with lots and lots of community contributors. To give you a few more stats, Podman is often compared to Docker, and this is basically what happened in 2020: Podman has had about 139 different authors and a few thousand commits. We've recently had about twice as many issues worked on as Docker, and four times as many merges have gone into it. Also, when you look at these stats comparing the different projects, keep in mind that Podman is built up of a whole bunch of other projects, so there's also containers/storage, containers/image, containers/common, and we didn't gather all of those statistics, but you can see it's very, very active. The Podman mailing list has about 150 members right now. It's a fairly low-volume list, so I'd advise you to get onto it; if you go to podman.io you can follow the instructions to join. It's mainly announcements, but there is some discussion from people asking, you know, how do I do this, how do I fix that? Most of the discussion on Podman also goes on in IRC, at #podman on Freenode. And a lot of communication goes through GitHub, so there's a lot of activity in GitHub issues.
So, most of you are here to hear about the new features in Podman. What I did when I was starting to throw this together was go to GitHub, look at where we were a year ago, and basically go through all of the pages and pages of fixes and features that have happened in the different releases. I think there have been about ten releases since last year. Last year we were at about Podman 1.8; as of this week we released Podman 3.0. One of the biggest features of Podman 3.0 was the introduction of a REST API. Traditionally, a year ago, Podman was using a tool called varlink for communications, to allow remote applications to launch Podman containers. Podman advertises itself as a daemonless environment, and what we use is systemd socket activation for launching containers: Podman will use systemd to listen on a socket, and then individual applications can communicate with that socket to launch containers. For the REST API we decided to do two different endpoints. We have basically a compatibility mode, which you might think of as the Docker mode, and then we have libpod mode. The libpod mode is about the advanced features: things like pods and other features of Podman that aren't available using the traditional mode. We're also wrapping those, so for the REST API there is a project called podman-py that's being worked on upstream. It's actually still somewhat in fledgling mode, so for podman-py we really need contributors; we'd really love to have people come in and help build out the API, to basically wrap the REST API in Python. With our compatibility layer, we now have full support for docker-py.
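As a sketch of what those two endpoints look like on the wire (socket path and version prefixes as in Podman 3.0; adjust for your setup):

```shell
# Start the socket-activated API service (rootless shown; drop --user for root)
systemctl --user start podman.socket

# Docker-compatible endpoint: the same paths a Docker client would use
curl -s --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" \
     http://d/v1.40/containers/json

# libpod endpoint: Podman-specific features such as pods
curl -s --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" \
     http://d/v3.0.0/libpod/pods/json
```

The `http://d/` host is a dummy; with a unix socket, curl ignores the hostname.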
The goal here was to allow people to replace Docker with Podman. A lot of people have built applications, CI/CD systems, and other tooling that talk to the Docker socket, and a lot of that was based on the Docker Python bindings, docker-py. So we actually test against docker-py a lot to make sure that we're doing everything right, and we have docker-py tests running in our upstream CI to make sure that we don't break anything. There are a couple of features in the REST API that we don't currently support and don't plan to ever support. The two main ones are linking, which has actually been deprecated by Docker, so we don't plan on ever supporting it, and Docker Swarm, which we don't support at all in any of our tools. The reason for that is we believe that Kubernetes is the future and we really want to guide people towards Kubernetes. At this point I'm going to do a live demo, and I figured the best way to do a live demo of the API is to use the Docker client. So I'm going to use the Docker CLI, and I'm going to show you the systemd status of Docker: you can see here that the Docker daemon is not running. What we've done is set up a link from /var/run/docker.sock, so basically Podman is going to be listening at the Docker API endpoint. If I do docker version, you'll see that I'm running the Docker client here, but on the server side you'll see that it's Podman answering the communications. And if I do podman ps -a you see Podman containers; if I do docker ps -a you see the same thing. I can do docker images; I can do docker inspect on an image. And as I said, if you looked at the system right now, you'd see no Docker running at all.
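The wiring behind that demo can be reproduced in a couple of common ways (paths as on a typical systemd-based distro):

```shell
# Option 1 (root): point the Docker client's default socket at Podman
sudo systemctl start podman.socket
sudo ln -sf /run/podman/podman.sock /var/run/docker.sock
docker version        # client: Docker CLI, server: Podman

# Option 2: leave the default socket alone and use DOCKER_HOST instead
export DOCKER_HOST=unix:///run/podman/podman.sock
docker ps -a
```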
So basically what's happening here is that the Docker client is talking the Docker API, fully, to a Podman backend. And obviously you could do this with docker-py and all of the other types of communication from tooling that you might have on an existing system. We have lots and lots of people in the community running tests. I think you can now do GitLab runners, which are based on talking to the Docker socket. We've had people running all sorts of containers that talk to the Docker socket and just link in the Podman socket instead. And really the goal is to be as compatible as possible: if something breaks, we consider that a bug in Podman and we will investigate and fix it. So that is the quick overview of the REST API. But one of the goals of the REST API was actually to get us to the point where we could support docker-compose. docker-compose is an incredibly popular project out there, and a lot of people sort of live and die with it. We look at compose as a good way of running multiple containers on a single platform, although we'd still encourage people to use Kubernetes for that; Podman has full support for a thing called podman play kube, which will take a Kubernetes YAML file and launch multiple containers and multiple pods off of it. But because of the install base of compose, we obviously wanted to be able to support those use cases as well. Now, a couple of things about the compose demo we're about to do. First of all, it's rootful only at this point. We plan on changing that going forward, but as of 3.0 we only support rootful. The main reason for that is that there are major differences between the network stack the Docker daemon uses and the one Podman uses.
Podman uses a thing called CNI, which is based on the same tooling that Kubernetes uses for setting up networks, while Docker has more of a built-in environment. And when you go to rootless containers, we have to use a totally different stack, because a lot of the stuff you want to do for setting up networks, iptables rules, things like that, is not available to rootless users. So we have to continue to work through how compose could work in a rootless environment. But compose is well tested: we actually use the upstream docker-compose repo, and we run all of those compose tests on every pull request to make sure that we continue to support compose directly. I'm supposed to have a demo slide here, so I'll go back to demoing. Okay, there was a request in the chat to go a little bit slower in the terminal because there's some delay in the stream; well, this one's going to be automated, so I won't be able to control the speed. Okay, so what's happening here is we're about to start up the Podman socket. This is the REST API getting started via the socket-activated systemd unit, and we're going to show that it's running on the system. And that shows you the symbolic link from /var/run/docker.sock to /run/podman/podman.sock. Then I'm showing the version of docker-compose that we're running. These files are actually from the upstream project, from the Docker repo, and we're about to run the application. So we're doing a docker-compose up, which is going to launch containers. One of the interesting things about compose is that it can do builds at the same time. So what you're seeing right now is compose building the image, and it's using podman build, which indirectly uses Buildah to actually build the image on the system.
And so that's what the delay is right now. We're completing; you can see that it's finishing up writing the image to the system, and this is all the different output that you get from compose. At this point compose tells you that it completed, and it's launched multiple containers. So now we're going to use the podman command to show you the two different containers that are running on the system, launched by compose. You can see that Traefik brings up an additional network here, and we're listing out the networks that have been created by the compose client. At this point we're going to attempt to connect to Traefik to show that the application is actually running, and it says "Hello from Docker", and of course that's Podman lying: it should actually be "Hello from Podman", because there is no Docker running on the system. At this point we're shutting down the docker-compose application on the system. You can see the containers are gone, and now that network interface has also disappeared from the system. And that is the end of the demonstration. So that is compose running, and again, any bugs that you find in compose as it relates to the API, we plan on fixing. The only caveat, as we talked about earlier: a lot of compose has been tied into Swarm, and since we don't support Swarm, we won't be supporting those types of compose scripts. But for the most part, if your compose script doesn't use linking or Swarm, it should work with a Podman backend. One last thing: we talked about the Python bindings to the REST API. We have docker-py for the compatibility layer, and we have podman-py for the libpod layer.
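The compose workflow from the demo condenses to something like this (the Dockerfile and compose file here are made-up minimal examples, not the upstream ones used in the demo):

```shell
# Podman's socket answering at the Docker path (rootful, as in the demo)
sudo systemctl start podman.socket
sudo ln -sf /run/podman/podman.sock /var/run/docker.sock

cat > Dockerfile <<'EOF'
FROM docker.io/library/alpine
CMD ["sleep", "3600"]
EOF

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    build: .          # compose drives `podman build` (Buildah) here
  web:
    image: docker.io/library/nginx
    ports:
      - "8080:80"
EOF

docker-compose up -d   # containers launched by Podman, no Docker daemon
podman ps              # the same containers, seen from the Podman side
docker-compose down
```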
We also have Go bindings for working with our remote API. These are based on the same tooling as podman-remote, so we can use Podman on a Mac or a Windows box to talk to the REST API, and we're standardizing those Go bindings so that people who want to build their own tooling against the REST API can use them. There was also an effort from community members to work on a Java API; we have a lot of discussion going on about a C API, and I think there's been a little talk about JavaScript and some other tooling, with people using them to implement the APIs. We also have Swagger, which allows you to generate API bindings on the fly for different languages. Not something I'm an expert at, but Swagger is available if you want to build your own wrappers around the APIs. At this point I'm going to hand it over to Valentin to take care of this section. Thanks, Dan. So, for those who don't know me, I'm Valentin. I'm one of Dan's plumbers, so to say, working with Dan and the team and the community on the container tools and a couple of libraries. As I wrote a couple of the features that Dan wanted to talk about, he asked me to join his talk, and I'm happy to do it. One thing that I've been working on extensively in the past is the integration with systemd. Something that is very close to our hearts is to integrate Podman as seamlessly as possible into a modern Linux system, and part of that is systemd. Dan and the team, actually before I joined the team, were working hard on getting this integration partly into Docker as well, but it was very hard technically, because Docker runs as a daemon. It's really hard to integrate that into systemd, which really wants to manage all these resources and wants to know which processes are running; if you have a client-server implementation, this is kind of hard.
Also, the Docker maintainers weren't very interested in it, which is bad, because when you want to install packages, many of them, web servers for instance, really want to install their systemd services. So if you do a dnf install httpd, in most cases you'll need some systemd somewhere. And this is, or was, bad, because containers were really supposed to help us get things done faster, while at the beginning, from a packaging perspective, we were thrown back a couple of years because many things just didn't work anymore. So Podman approaches systemd a little bit differently, not only because we really want to have a tight integration, but also because it's much easier for us: Podman implements a rather traditional fork-exec model, where containers are really children of the Podman process. This integrates very nicely into systemd, which wants to know which processes are running in the service, and which cgroups and scopes belong to the service, to have proper service, resource, and lifecycle management. But integrating Podman into, or running Podman in, such a systemd service is kind of tricky; there are a couple of things that really need to work, that we need to tell systemd. So we came up with best practices, and, similar to allowing Podman to generate kube files, Podman can also generate systemd unit files. If you do a podman generate systemd on a container or pod, Podman will spit out these .service files, and you can easily deploy them, rootless and also as root. Going the other way, as I mentioned a little before, Podman also runs systemd seamlessly inside a container. If the entry point of a container is systemd or init, Podman will set up all the different mount points and tmpfses that systemd wants in order to run inside the container, and it will run just fine. You can also control it on the CLI, with podman run or podman create --systemd=true.
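Both directions of the integration can be sketched like this (container name, image, and paths are illustrative):

```shell
# Podman under systemd: generate a unit file for an existing container
podman create --name web -p 8080:80 docker.io/library/nginx
podman generate systemd --new --files --name web
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user start container-web.service

# systemd under Podman: run an image whose entry point is systemd/init
podman run -d --systemd=true registry.fedoraproject.org/fedora /sbin/init
```

The --new flag makes the unit create a fresh container on each start, which is what you usually want for deployable services.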
So when you set the flag to true, it will run as well. And this integration with systemd allowed us to cover new use cases. A really cool one that we worked on last year is podman auto-update. What podman auto-update does is check the containers which are running in a systemd service at the moment, reach out to the registries for the images those containers are using, and check if there's a new one. If there is, it will automatically pull down the new image and restart the services, and voilà, that's an auto-update. This is really nice for automating certain use cases, and the target use case we had in mind is the edge. While there are different definitions of what the edge is, I guess most people would agree it's running outside of our data centers, and it's pretty cool to see where this is being used now: just two weeks ago I was talking with a colleague who told me this is now being used on oil rigs. So this is really the far edge; you're somewhere out on the ocean, the connectivity can be bad or not there at all. But by using systemd, and by not reinventing the wheel at that point, this is really powerful and super stable. So to illustrate this a little bit (thanks, Dan) we prepared a demo to show how you can use podman auto-update. I was talking a little bit already about the points in the intro: we really want to have this tight integration. If you want to trigger these auto-updates, you can use the podman auto-update command, or you can customize everything via the systemd timer and unit pair that we ship. This allows for covering event-based triggers, if you want to shell out to Podman, or pull in the podman-auto-update unit via systemd dependencies.
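The moving parts being described fit together roughly like this (image and container names are illustrative):

```shell
# Opt the container in to auto-updates via the label, and run it in a
# systemd service generated by Podman
podman create --name web \
    --label io.containers.autoupdate=image \
    registry.example.com/myorg/web:latest
podman generate systemd --new --files --name web
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user start container-web.service

# One-shot check: pull newer images and restart the affected services
podman auto-update

# Or let the shipped timer/unit pair do it periodically
systemctl --user enable --now podman-auto-update.timer
```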
Or, if you have a more time-based approach, you can use the cron-like systemd timer. So what we do now is: we first set up a local registry, then we copy one image to it, because we want to simulate an update, and we create a container based on it. So we first run a local registry and copy just a very small image for the demo, an older Alpine. Now we create a container with this image. It's important to notice the --label io.containers.autoupdate=image: you have to configure the container for auto-updates, so this is something you opt in to. Now we can generate a systemd unit for this specific container and start it. So what we do, all right, a little lag, at least on my end: we first did a podman generate systemd and copied the file into Dan's home directory. Everything works rootless, by the way. Here we reload the systemd daemon, we start the container service, and now we have a look at the service; let's try to remember the main PID that we're seeing there. All right, so here we see that the service is running, and the main PID is conmon. conmon is the container monitor: a really small shim around the container, written in C, that Podman and its sibling projects use to run and monitor containers. It keeps namespaces open, does logging, and collects the exit status, among other things. So if we run podman auto-update, well, nothing should really happen, because we... oh, something happened; we ran the demo before, so this was, I guess, a bug in the script, we should have cleaned it up properly beforehand. On a fresh system nothing would happen here, because we didn't update the image yet. So now we update the image, we overwrite it, and then rerun podman auto-update. Let's see if it's starting now.
Since the image was already overwritten before, yes, this is the expected outcome. With podman auto-update we can see that the image has been pulled down, and it also reports which systemd service has been restarted; and we can see here that the main PID has changed, because we restarted the service. So what podman auto-update does is: it goes through the containers, it looks at the environment of these containers, and the environment of the container in the Podman database records which systemd service it is running in. Then Podman reaches out via D-Bus to systemd, and asks systemd to restart the service. I think implementing auto-update wasn't more than 80 lines of code; otherwise, if we didn't have this tight integration with systemd, we would have had to reimplement a lot of logic. And I think this is a nice example of how far we can push certain use cases by just using what's already there on a modern Linux system. All right, another thing we've been working on, which has now made it into Podman 3.0, is improved short-name resolution. A short name is a reference to an image that does not point to a registry. When you do a docker pull fedora, Docker will always resolve to Docker Hub: instead of pulling just fedora, it will resolve to docker.io/library/fedora:latest. So there are certain rules to resolve and normalize these short names. This worked well for a certain amount of time, but after a couple of years more requirements came up: we also want to be able to resolve to the Fedora registry, the CentOS registry, Red Hat's registry; there's Quay. All major companies and distributions have their own registry. So Podman and its sibling tools allow for resolving to more than just Docker Hub: you can configure everything in /etc/containers/registries.conf, and if you're interested in that, I'll give a presentation about it later.
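The registries.conf side of this looks roughly as follows; the drop-in path matches common packaging, and the aliases shown are just examples:

```shell
# System-wide short-name aliases live in registries.conf or a drop-in file;
# writing to a temporary path here purely to illustrate the format
mkdir -p /tmp/registries.conf.d
cat > /tmp/registries.conf.d/000-shortnames.conf <<'EOF'
[aliases]
  "fedora" = "registry.fedoraproject.org/fedora"
  "ubi8" = "registry.access.redhat.com/ubi8"
EOF

# On a real system this would live under /etc/containers/registries.conf.d/
grep ubi8 /tmp/registries.conf.d/000-shortnames.conf
```

With an alias in place, `podman pull fedora` resolves without prompting.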
You know, this afternoon or morning or evening, depending on where you are; just a couple of hours from now. So, to improve a little bit on that, and also to improve the security of pulling images, we wanted to make this more explicit. When you update to Podman 3.0, you will notice that when you're running Podman in a terminal, with access to a TTY, Podman will prompt you and ask which registry you really want to pull the image from, so that there's no unexpected surprise. So here's an example: if you do a podman pull image:tag, it will ask you which registry you really want to pull this image from. If you have selected one image and the pull has been successful, Podman will record an alias for it, so that image name will then be aliased to the selection you chose before. Then, if you can jump to the next slide: the feature in this case is short-name aliasing, which is now an additional field in registries.conf, which allows aliases to be configured by default. Here is just a snippet of the short names that are now being shipped in Fedora, soon in CentOS, and also in RHEL; there you're not prompted anymore. And below you can see the link to an upstream project on GitHub, github.com/containers/shortnames. This is a community-wide approach, and we were pretty happy that it was so well received by the community and other companies, because vendors can really make official where they want their images to live. So people are not locked into one registry, in this case Docker Hub, but can really choose where their image gets pulled from. We have Red Hat, of course; we have SUSE, Oracle, a couple of other Linux distributions, Microsoft. And if you're interested in it, I'll also talk about this later this afternoon. I'd just like to point out the time: we have about five minutes left for the talk, plus time for questions. But feel free to continue for now.
I'll handle it. Okay, so we have 30 minutes of presentation left and five minutes to do it, so I'll try to rush through some of this stuff. Basically, we've added lots of features for security. One of the things that Docker standardized on many years ago was the default list of Linux capabilities. I'd argue that a bunch of these we really don't want on by default, so with 3.0 we're actually moving to drop some of these capabilities. The three that I'm listing here are three that I don't believe should be on in general-purpose containers. You can modify this: we have a new feature that we won't have time to talk about, called containers.conf, where you can actually specify yourself what the default list of capabilities is, if you want to go back to what Docker originally specified; but we've dropped those three capabilities. We also have a new feature in Podman that helps people avoid running privileged containers. One of the things we do with containers is that part of the /proc file system is masked over: we hide certain parts of it just to protect the system and run it more securely. We mask over these file systems with empty directories, to prevent users or bad applications from interacting with them, and we mount certain directories read-only to prevent attacks. But the problem with that is that you might have an application that runs really well in a locked-down container but just needs to use one of these kernel file systems, and in that case the only option used to be to run the thing totally privileged. So what we did was expose the ability to unmask paths. If you find out there's one path you need, then the people building that image can say: just run with this option, and we'll unmask just that path. Similarly, we have added masks.
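These knobs show up as --security-opt values on podman run; a sketch, with example paths, and flag spellings as in the podman-run man page for 3.x (double-check them on your version):

```shell
# Unmask a single kernel path instead of running --privileged
podman run --rm --security-opt unmask=/proc/acpi \
    registry.fedoraproject.org/fedora ls /proc/acpi

# Going the other way: mask an extra path you want hidden
podman run --rm --security-opt mask=/proc/interrupts \
    registry.fedoraproject.org/fedora cat /proc/interrupts

# Tweak how /proc itself is mounted inside the container
podman run --rm --security-opt proc-opts=hidepid=2 \
    registry.fedoraproject.org/fedora ps ax
```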
So if there are parts of the operating system that you want to hide, or parts of /proc that you want to hide from users, you can actually add additional masks over directories. Also, the /proc file system has lots of different options for being mounted; you can go into the proc man page and find those. Certain types of containers need advanced options for how /proc is mounted, and so we've exposed some of that as well. The last thing from a security point of view: I've been talking about user namespaces for many years, and running Podman in rootless mode takes advantage of the user namespace. But I'd really like to get more people to use it in rootful containers too, like some of the stuff Valentin was talking about with systemd-generated containers. The difficulty with the user namespace is that you really want to pick unique ranges of UIDs for running your containers, different from those of the other containers. So we built into Podman the ability to automatically pick ranges of UIDs for individual containers. With a flag like --userns=auto, every container you generate will run inside of a different user namespace. The next thing I want to talk about quickly, as we run out of time, and sadly I won't be able to show the demo for this, is that we wanted to make Podman and Buildah better at building multi-architecture images. So now we have podman build --manifest. When you're building a multi-arch image, you create what's called a manifest list.
And a manifest list is basically a manifest that lists multiple different images. The goal is that when you go to a registry and pull down, say, UBI 8, and you happen to be on an s390x machine, you don't have to specify that you're on s390x: what you want is for the tooling, whether that's Docker or Podman or CRI-O or any of these tools, to be smart enough to pull down the specific image for your architecture. Well, those images were somewhat difficult to build. So now we've extended podman build and Buildah to be able to build those images on the fly. And one of the cool things in Linux is the ability to build these images in emulation mode, so with Podman 3.0 you can actually, on an x86 machine, build an s390x version of your image, an x86 version, and an Arm version. Then you can use a single podman push command to push the image to the registry, and now people on different platforms and architectures can use your images. But as I said, we're running way out of time. Other features: one of the big additions was handling multiple different networks. Podman now has really extensive networking support. There's a podman network command, based on the Docker network command. We can add containers to networks and move containers between different types of networks; we can connect containers to multiple networks. You can also set up network aliases: you can basically alias a container as being, say, "database", and then run other containers on your system that connect to "database".
They'd be able to interact directly with the database application and figure out which of the containers it was, and that's all been wired into the system. So there are some really nice advanced networking features between containers, and a lot of functionality has gone into that over the last year. Lastly, I'm going to run through some new commands that didn't exist a year ago. There's now podman volume: a bunch of commands for creating named volumes on your system, so you can create them, inspect them, list them, prune them. These also support the protocol Docker defined for volume plugins, so if you have certain volume plugins that you want to use on your system that were built for Docker, Podman can now take advantage of those as well; it will talk that protocol to those volume plugins. One of the features we've been asked for for a really long time was the ability to rename containers, and it turns out compose actually does a lot of renaming, so we finally added the podman container rename function. Another really kind of cool feature is that we now allow you to mount images. Podman, a year ago, allowed you to mount containers: you could mount a container, it would give you back a mount point, and you could go into it and look around inside the content on your system, sort of being able to review it. That was for containers; now we have it for images as well, so you can interact with either. And when I'm talking about a container here, a container is a non-committed image: you can look at a running container and what's going on in its content, but you can also look at individual images. And that also works inside the podman command when you run a container.
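A sketch of that mount workflow (rootless users need to run these inside podman unshare):

```shell
# Mount an image read-only and poke around in its content
mnt=$(podman image mount registry.fedoraproject.org/fedora)
ls "$mnt/etc"
podman image unmount registry.fedoraproject.org/fedora

# Same idea for a container's filesystem
podman create --name inspectme registry.fedoraproject.org/fedora
mnt=$(podman mount inspectme)
ls "$mnt"
podman unmount inspectme
```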
In the example at the bottom here, I'm basically running a UBI 8 container, but I can actually mount an image from my image store into the container. So this bottom command is showing you mounting the Fedora image at /fedora inside your UBI 8 container. Imagine you're doing a scan: you could have a tool that scans containers, or just does some kind of examination of images, and you could run that in containers. So it's an interesting use case for some of Podman's advanced features. So I'm going to stop sharing now, and I think we might have three minutes left to answer any questions. You have seven more minutes if needed. And I think Valentin already responded to some of the questions, so I will leave it up to you; just please start with the question at the bottom. I can handle it. So I'll jump to the ones that weren't answered yet; thanks, some others helped reply as well. Here's one from Nick Piper: what do you recommend with Podman to ensure one container instance is running on any one of three RHEL hosts, and for directing user traffic to that one floating container instance? Okay, so to me, you're asking a question where I would say Podman is not the tool for that use case. If you want to run containers on multiple different hosts and you want to make sure there's one instance of your application running across different hosts, you're entering the world of Kubernetes; that's orchestration at that point. Podman is all about managing containers on a single host. As soon as you go off-host, I think you're looking at a higher-level tool that's going to coordinate those, and the tool for that, to me, is Kubernetes or something like it. If you want a productized version of Kubernetes, then you go with something like OpenShift.
You know, podman has the ability to communicate with other podmans through podman remote, but you'd have to build higher-level tools on top of it to do that.

Another interesting one. I already replied to it, but maybe you can iterate a little bit. Dan, I believe you stated podman allows running multiple images without using compose, using Kubernetes YAML files instead, and the question is whether podman play kube requires, you know, Kubernetes, MicroK8s, or K3s for it.

No. What's happening with podman play kube is that we're just taking the Kubernetes YAML and converting it into our own API calls, and podman generate kube does the reverse, generating YAML files. You're basically using Kubernetes YAML files as input to us to build containers. We have no integration; podman has no formal integration with Kubernetes and does not require Kubernetes. But our real goal is Kubernetes, and we have a conference coming up in a couple of weeks where we're going to have a nice demonstration: the Container Demos Days, which I think is March 8th and March 9th. We're going to show someone going from compose: taking a compose script, running the containers, then using podman to examine those containers and actually generate Kubernetes YAML, and then taking that generated YAML and running it inside something like OpenShift. Really, we view podman as the tool for running containers locally, but then allowing people to easily get into Kubernetes. So we want podman to let you take the traditional way you run containers, like with Docker, and easily get to Kubernetes, which I find difficult to get into without that tooling.
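The compose-to-Kubernetes flow described here can be sketched as follows; the container name, image, and file name are illustrative, and the commands reflect the podman 3.0-era spelling.

```shell
# Run a container locally with podman
podman run -d --name web -p 8080:80 nginx

# Generate Kubernetes YAML describing that running container
podman generate kube web > web.yaml

# Replay the same YAML locally with podman; no Kubernetes required
podman play kube web.yaml

# Or hand the YAML to a real cluster instead:
#   kubectl apply -f web.yaml
```

The point is that the YAML is ordinary Kubernetes YAML: podman only consumes and produces it, so the same file works on a single host or in a cluster.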
So that's the idea, but yeah, we don't require any other services to be running to run podman containers, with or without Kubernetes YAML.

So, to not only focus on the positive ones and get some critical feedback (it also received two thumbs up), one question from anonymous: are there any improvements in podman testing? I got hit by several regressions in the last year. I already answered that we make sure we always add regression tests, and in fact our CI rejects PRs from being merged without added tests, but maybe you want to iterate a little bit on that.

Yeah, it's somewhat difficult. Where a lot of people got hit by regressions or problems is usually on different distributions. Most of the core development team obviously works in the Fedora world, so Fedora is probably the most stable place to run podman, but we obviously have a huge user base on Debian and Ubuntu. What I see a lot is that when we do a release, the tooling we build on top of all the distributions is sometimes buggy for a week or so while we race through and fix it, because we're not doing enough testing on other platforms. We actually do test on Ubuntu and Fedora in our core testing environment, but the folks packaging the tools are doing things on top of the Kubic project, and sometimes they mishandle certain new features, and that's caused some issues. Our goal, really, is to get out of the packaging business: podman is now available from Debian directly, it's going to be available directly in Ubuntu, and then people who are more expert in the different distributions can package it. But yeah, as far as bugs go, we're racing ahead at breakneck speed.
And sometimes we have regressions, and we're trying; our testing is expanding. But any time you have a bug to report to us, if you can get us a fix for it along with enhanced testing, that really helps. Testing is key to a project, you know, a huge project like podman. It's key.