 So my name is Dan Walsh. I work for Red Hat. My official job role is consulting engineer at Red Hat; my title is lead architect of the runtimes team. So we handle everything underneath Kubernetes and underneath OpenShift needed to run containers. We have a whole bunch of people working on our teams, and we've been developing a whole bunch of — really, we should call them container engines. The "runtime" term is overused. What these really are is container engines, and I see the low-level things like runc and Kata Containers as really being the runtimes. So this talk is called Replacing Docker with Podman. Really, our view of the world is tearing apart what Docker did into a series of sub-components. Podman is replacing the Docker CLI — the traditional way you run Docker commands is what we're going for with the podman command. So first you dnf install — I could have put apt-get up here — you dnf install podman. Then you do this. Any questions? All right. And to show you that's true, this guy, Alan Moran, who I don't know, a couple months ago said: I completely forgot that two months ago I set up an alias of docker=podman and it has been a dream. No big fat daemons. Project Atomic. Down below, one of the comments asked him: how did you figure out you were using Podman instead of Docker? And he said, I typed docker help and it came up and gave him Podman's help message. So that's how he figured it out. So obviously I can't stop at that. So at this point, everybody has to stand up. If you've seen me do talks before, I make you guys participate. Okay. So please read out loud anything that's in red. Excellent, nice work. All right, so containers are a Linux thing — or basically a concept. And do you go to make copies and say, I'm gonna make a Xerox?
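The alias trick from that tweet is really all it takes; here's a minimal sketch (install commands vary by distro):

```shell
# Install podman first, e.g.:
#   sudo dnf install podman       # Fedora/RHEL
#   sudo apt-get install podman   # Debian/Ubuntu on newer releases

# Point the docker command at podman -- the CLIs are intentionally
# the same, so existing habits and scripts keep working.
alias docker=podman

# Capture and show the alias definition to confirm it's in place.
ALIAS_DEF=$(alias docker)
printf '%s\n' "$ALIAS_DEF"
```

From then on, every `docker run`, `docker ps`, or `docker images` you type is actually Podman, with no daemon behind it.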
Or do you take a tissue out and say it's a Kleenex? Or do you take an aspirin out and say it's an aspirin? Oh, that's a bad example. But basically, at this conference I cringe in the back of the room every time I hear someone use the D word. I actually have another talk where I put up a swear jar, and every time I say the D word I have to put money into the swear jar. But I'm not gonna do that today because I don't have much money. But anyways, what do you need to run a container? Okay, what does it mean when I run a container? The first thing you need when you wanna run a container is to identify: what the hell is a container? And a container in this case has been standardized — at least the image format of what people mean when they say I'm gonna run a container. They're talking about something that sits at a container registry like docker.io or quay.io — there are probably a hundred different companies out there all doing container registries. And there are these images, the tarballs that sit up there — I mean the container images. A couple of years ago — thanks to CoreOS; if you saw Vincent's talk earlier, it covered the history of container runtimes — CoreOS introduced the appc spec and caused a fracture: all of a sudden there were gonna be two different types of container images. It actually forced all of the companies involved in containers — big companies and startups — to get together and say, we're gonna standardize on what it means to be a container image. And that's where OCI came out. The big companies I'm talking about — Red Hat, IBM, Google, Microsoft, Docker, and CoreOS at the time, who we've now acquired as most of you know — got together and they standardized on what it means to be a container image.
Last year, last December actually, they came out with the OCI image specification, and now we sort of have a good idea of what it means to be a container. So if I say I wanna run the Fedora container, I know what I'm gonna get when I pull it down — or at least I have an idea of what I'm gonna get. So the next thing I need is a mechanism for pulling images off of a container registry to the host. Again, some of this was covered earlier this morning, but basically we built a tool several years ago — Antonio in the back of the room did, actually. We originally did a pull request upstream, because what we found is that people were pulling images off of container registries and they were huge — we're talking some images of 1.5 or two gigabytes. But there's a JSON file, the manifest, that basically describes what's in the image. So what we said is, why don't we build a command like docker inspect --remote to pull down the JSON file associated with the image, so I could look at it and figure out if I actually wanna pull down the image. The only way to get an image and look at what's inside of it right now is to pull it to the host. You have to pull that two gigabytes to your machine before you can say, oh, that's not really what I wanted, let me get rid of it. So we went upstream with that pull request and they said, sorry, we're not interested, it confuses the API and the CLI too much. But they said: it's just a web interface, right? Container registries are nothing but web services with tarballs on them, it's all web protocol — so go off and build your own tool to pull down the JSON and look at it. So we built Skopeo. Skopeo means "remote viewing" in Greek, and that's why we have a Greek hat and a telescope in the logo. And after we built Skopeo, Antonio went off and started implementing more of the registry protocols.
So instead of just pulling down the JSON, he also pulled down the image, and he also figured out he could use Skopeo to push images. Skopeo slowly evolved into this really cool tool that lets you move images around the environment — and you don't have to be root to do it. You can actually copy off of one registry and copy to another registry without ever having pulled the image bundle to your host. So it really evolved into a cool tool. And we were actually working with CoreOS before we acquired them, trying to convince them to use Skopeo to move images in and out of rkt, and they said, well, we don't really wanna be exec'ing out to a tool, why don't you make it into a library? So we created containers/image. containers/image is a library that now works independently; other people are contributing lots and lots of pull requests to it for moving images around. The number one contributor outside of Red Hat is actually Pivotal, one of Red Hat's biggest competitors in the OpenShift space. Pivotal is using containers/image for moving images in and out of — I think they call it Garden — their equivalent. So the next thing, after you pull the image to the host, you need to be able to explode the image onto disk. In the Linux container world we use copy-on-write file systems for that, because an image tends to be a layered thing. You basically install one layer of an image, then create another mount point on top of it, put the second layer on, then another layer on top of that — and to do that you use copy-on-write file systems.
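As a rough sketch of the workflow just described — the image names and destination registry here are illustrative examples — Skopeo can inspect and copy without ever pulling layers to the host:

```shell
# Inspect a remote image's JSON metadata without pulling its layers.
# Guarded so the sketch degrades gracefully where skopeo isn't installed
# or the network is down.
if command -v skopeo >/dev/null 2>&1; then
  OUT=$(skopeo inspect docker://docker.io/library/alpine:latest 2>/dev/null) \
    || OUT="skopeo inspect failed (offline?)"
else
  OUT="skopeo not installed; commands shown for illustration"
fi

# Copy between registries without unpacking anything locally
# (the destination registry is a placeholder):
#   skopeo copy docker://docker.io/library/alpine:latest \
#               docker://registry.example.com/mirror/alpine:latest
printf '%s\n' "$OUT"
```

The `docker://` prefix is a transport; Skopeo also understands others (local directories, OCI layouts), which is what makes it useful for moving images around without a daemon.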
So way back when we were first starting to work with Docker at Red Hat — Alex, if he's here somewhere, did most of the work, the guy that's in charge of Flatpak now — we introduced a whole bunch of different copy-on-write file systems into what was Docker at the time: OverlayFS, Btrfs, device mapper. So what we did is we took all that code and moved it into an independent library, containers/storage, so it was independent from the upstream Docker project, and then we started to evolve that library. So all these things are independent. Next thing you need: we've defined an image, we pull the image to the host, we store it on top of some kind of storage, and then we need a standard mechanism for what it means to run a container. Well, luckily OCI standardized that too. The second OCI specification is the runtime specification, which basically says: I'm gonna write a JSON file, everybody has to understand what that JSON file looks like, and then I launch a program that reads that JSON file and creates the container on the system — all the container processes — and sets up the cgroups, security settings, and namespaces. So that's the last part of running a container on the system. One other thing we needed is a monitor. When I'm running a container, the container can just exit, okay? It doesn't know it's running in a container, it's just a process on the system. So you need something to actually watch that process. If the process exits, you wanna grab its exit code and store it somewhere. You also wanna keep open the TTYs that are connected to it, because people are gonna come to you and say, hey, what's going on inside that container? So you need a process that sits out there and monitors it. And that's called conmon, all right?
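To make the runtime-spec step concrete: the JSON file in question is `config.json`, and runc can generate a default one. This is a sketch that falls back to a stub where runc isn't installed:

```shell
# The engine's hand-off to the runtime is just this JSON file plus a
# root filesystem; the runtime does the namespaces/cgroups/security work.
workdir=$(mktemp -d)
cd "$workdir"

if command -v runc >/dev/null 2>&1; then
  runc spec                          # writes ./config.json with defaults
else
  # Stub so the example still runs; a real config.json also describes the
  # process to exec, mounts, namespaces, cgroups, and capabilities.
  printf '{"ociVersion":"1.0.0"}\n' > config.json
fi

head -c 120 config.json              # peek at the spec the runtime reads
```

Any OCI-compliant runtime — runc, Kata, and so on — accepts the same file, which is exactly why the engine and the runtime can be developed separately.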
So if you went to the earlier talk by Antonio, he talked a little bit about conmon and how it's used inside CRI-O. conmon is a simple C program that is the parent of the container — the parent of PID 1 inside the container — and it just sits out there running until the container exits, catches the SIGCHLD, stores away some last-minute state, and then exits. That means that any of these container engines we're gonna be talking about, like CRI-O and Podman, can go away, right? They don't have to stay running on top of the container to watch what's going on. There's just a little conmon process out there running. Did I skip ahead? Oh yeah, I just explained conmon, and CNI was up there too — sorry about that, someone should have said something. So CNI is an interface, again introduced by CoreOS, that defines a networking protocol for container engines. Other people get a plug-in interface to plug in different kinds of networking. It's really heavily used inside of Kubernetes, we use it inside of CRI-O, and we're gonna be using it inside of Podman. So all different types of tooling can build a CNI plug-in and then we can use it with these tools. So basically we have the five or six building blocks here that allow us to experiment with different types of container engines. And when we talk about a container engine, that is something that you talk to and say: pull me an image — so it knows what an image is — pull it down to the disk, put it onto storage, configure the OCI runtime specification, and then launch the runtime. It saves data, like what happens when the container exits, and reports it back to the human. So it's basically the human interface, or the tooling interface, for running containers. So one of my problems with Docker is that it has become a big fat container daemon.
It's become basically a roadblock for innovation. Having to have a daemon to launch a process on a Linux system just seems wrong, right? Everybody that runs the Docker CLI thinks the container is a child process of the client. What's actually happening is the Docker client is talking out to a server, out to a daemon, and the process that gets launched as PID 1 of the container ends up being a child or grandchild of the Docker daemon — not of the process you launched. I'm gonna show you some interesting consequences of that. But what's also happened is that if you have only one way of doing containers, it stops all innovation, right? If I want to do some special things — if I want to move those container images around — I have to go to the one entity and say, may I please do this? And they say, that's not really of interest to this upstream project, so you get denied. So by breaking it apart, we basically get the best of both worlds. All different tools can contribute to these different components, and all of a sudden you can start to build some interesting tools on top of them. So this talk is about Podman. So — does everybody here know what a pod is? All right, in the Kubernetes world, Kubernetes launches pods. It doesn't launch containers. A pod is a group of one or more containers that share the same namespaces and move around the system as a unit. What Kubernetes wanted is to launch these one or more processes all locked together in the same namespaces and then be able to move them around the system. If you came the other night, there were some talks about sidecar containers. So you might have your primary application, and you might lock another container into it, and that second container is basically monitoring the first container, or doing something on behalf of that container.
I think someone was talking the other night about one that does all the authorization for it. So the primary application can do its work and not have to worry about authorization; it's the sidecar container that does it. So Kubernetes wanted this concept where I can run more than one container at the same time as a unit, and it just manages pods. So when we built Podman — Podman is part of our libpod effort, which is basically a library to build pods — we wanted to build Podman as a tool for managing pods and containers in the environment. But what we didn't want to do when we built Podman was give you a brand new UI or CLI. So we started out by copying the Docker CLI. To run commands in Podman, you use pretty much the exact same CLI that you use when you run Docker. So if you find any Docker command in the world, theoretically you should be able to just substitute podman for docker. So lastly, before I get to the demo, this is the architecture — the same architecture picture that was shown earlier for CRI-O. When I'm running Podman, it goes out and creates this environment: I have a conmon per container. If I'm running a pod, I will also have an infra container that basically just holds open all of the namespaces and cgroups, and then one or more containers running inside of it. So Podman can run pods, but it can also run regular containers — the traditional way you run containers. So at this point, we're gonna demo it. By the way, the icon here — a group of seals is called a pod. So that's where we got the name. Okay, so first we're just gonna do a podman version. We're gonna be running sudo to run it as root, and of course, like any good security engineer, I don't have passwordless sudo. So here we go.
We just launched it — this is version 0.9.1 of Podman. It's all written in Go, because that's what the cool kids wanna write in. The reason it's 0.9.1 is that we've been releasing Podman on a weekly basis — it's not 1.0 yet. The nine stands for the month and the one stands for the week, so this is the first-week-of-September version of Podman. And now I'm gonna show you podman info. So it's using containers/storage; containers are stored under /var/lib/containers. We have some additional features up above — it scrolled off the screen, but I'm not gonna scroll up because I'll probably screw it up. It's running on top of the overlay file system. And one neat thing we've added — this is something different from Docker — from a security point of view, we actually mount all the images with nodev by default. And it's showing you that you can pass in special overlay options. This is all stuff built into containers/storage that gives us all sorts of new features we can take advantage of. So I'm gonna cat out a Dockerfile. What I'm about to demonstrate is Podman running a container that has Buildah in it — Buildah is another one of our projects. It's gonna run Buildah and build from that Dockerfile, inside of a container, without giving any privileges to it. So here we go with the demo gods, hopefully. It's gonna pull down the Alpine image, because that's the smallest image available. If you look at it, we're volume-mounting in here: I have a myvol directory that I'm volume-mounting in — I'm using SELinux, so we're changing the label on it — and I'm bind-mounting my /var/lib/containers into it. So Buildah, inside of the container in this case, is gonna be writing to a host directory on the system.
And then I'm using VFS, so I've changed the storage type — and there it is. It's now finished; it actually built a container image with no big fat daemons. There are no daemons running on the system, nothing up my sleeves or whatever. Basically I ran, inside of a locked-down container, another container build — and that image could now be pushed to a container registry, all without any special privileges. So you can imagine, when we talked about CRI-O earlier, you can actually use tools like Podman and Buildah inside of locked-down containers. You can run really interesting workloads inside of a Kubernetes distribution — this kinda shows it. So here I'm gonna show the image that was just built. Oops, that's interesting, it doesn't enjoy that. But here we have it: I pulled down an Alpine image, it had a couple of layers that got installed, and then it ended up creating my image. Clear that screen. So an interesting thing here — one of the things I get asked often, since I'm supposedly a security guy: when I go to customers, customers are all worried about their engineers coming to them all the time saying, I gotta build this Docker thing, and I have to get access to the Docker socket. The Docker socket is running as root, and you can do things like docker run --privileged -v /:/host fedora chroot /host — and boom, I have full root on the system. Giving someone access to the Docker socket is actually worse than giving them sudo without a password, because there's no logging. As soon as I screw around on your machine, I can go and blow away that container, and as soon as I blow away that container, all the logging gets eliminated from the system.
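To spell out the risk just described — the escape itself is shown only as comments; don't run it on a machine you care about:

```shell
# Anyone who can reach the Docker socket can become root on the host:
#   docker run --privileged -v /:/host fedora chroot /host
# ...and then erase the evidence afterwards:
#   docker rm -f <that-container>
# Unlike sudo, nothing in the audit log ties those actions to a real user.

# The socket itself is root-owned; membership in the docker group is
# effectively passwordless, unlogged root. Guarded check:
if [ -S /var/run/docker.sock ]; then
  OUT=$(ls -l /var/run/docker.sock)
else
  OUT="no Docker socket on this host"
fi
printf '%s\n' "$OUT"
```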
So what they wanna do is use the Docker CLI on the system without requiring root. So here I'm about to show you running Podman without root. I'm gonna pull an image, and again, hopefully the network stays up — everybody that's doing a yum update right now, please stop. So it's gonna pull down an image to the system. Usually this is very fast, but this network is... talk amongst yourselves. Right, so this is Podman running without root. There are no daemons out there, and when I run Podman as a non-root user, it creates the storage per user: instead of the storage being out in /var/lib/containers, it ends up in ~/.local/share/containers/storage, I believe. Okay, so I just pulled down an image to my system — and that shows you the images on the host: there are a lot more images on the host, and there's only one image in my home directory. There, I just ran an Alpine container — ran the ls command inside of a container in my home directory. No root needed. So how are we actually doing this? We're taking advantage of the user namespace, which most of you have probably never seen before, or at least had very little exposure to. We actually have a tool in Buildah called buildah unshare — I just wanna show you, it basically puts you into a user namespace without being inside of a container. Right now I'm logged in as dwalsh on the system. That process right there that's now showing root owning the home directory — that's dwalsh. It basically swapped the UIDs around. So if I did id right now, you would see that as far as it's concerned, I am running as root on the system. If I did a cat of /proc/self/uid_map, you would see the mapping that's going on inside of the user namespace.
So if you're logged onto a Fedora system — and probably a bunch of other distributions have the same thing — shadow-utils now gives every user that logs into the system an entry in /etc/subuid and /etc/subgid that defines the mapping available to that user. On my system, as I said, my UID is 3267, and it maps UID zero to that — one UID in that range. Then it says: I'm gonna map UID one to 100,000, and I'm gonna do it for 500,000 UIDs. So that means in my home directory I can now use 500,001 UIDs: my own UID, plus 100,000, 100,001, 100,002, all the way up — and those get mapped to one through 500,000 inside of the container. Pretty cool, huh? It leads to some interesting problems, though. I can create content in my home directory that I can't delete afterwards unless I'm in the user namespace. If I exit the user namespace back to my regular UID, I have files in my home directory that I can no longer delete. I actually just wrote a blog on this this past week. Okay, so Podman has some interesting user namespace support. User namespace has always been this nirvana for container isolation. I just showed you how you can use it in a home directory, but what would be really nice is if I could use it on the system for separating containers. Right now, when I run Docker or Podman, I'm using different SELinux labels for each container — that gives me isolation. With user namespace, I want to be able to say: this container gets this range of UIDs, that container gets that range of UIDs, and therefore if I broke out of the container, UID 5000 would not be able to interact with UID 6000 on the system — just like normal UID separation. The problem, and the reason we've never used this — user namespaces predate Docker — is that up to this point there's never been file system support for it. So user namespace has been really, really cool, but no one's ever used it.
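The mapping just described can be sketched with the numbers from the talk (UID 3267, a sub-UID range starting at 100,000 for 500,000 UIDs). This shows the shape of an /etc/subuid entry and the /proc/self/uid_map it produces inside the rootless namespace:

```shell
# /etc/subuid format is user:start:count
SUBUID_LINE="dwalsh:100000:500000"

# Inside the rootless user namespace, /proc/self/uid_map has three columns:
#   container-uid   host-uid   length
# Line 1 maps container root to your own uid; line 2 maps the sub-uid range.
MAP=$(printf '%s\n' "$SUBUID_LINE" \
  | awk -F: '{ printf "0 3267 1\n1 %s %s\n", $2, $3 }')
printf '%s\n' "$MAP"
```

So container UID 0 is really you (3267), container UID 1 is host UID 100,000, container UID 2 is host UID 100,001, and so on up through the range — which is also why files chowned into that range look undeletable once you leave the namespace.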
So what we did with Podman, since the file system doesn't support it, is we're actually chowning files underneath the covers to make user namespace work. So here I'm gonna create a container in a user namespace mapping UID 0 to 100,000, with a range of 500,000. That just created one container. Now I'm gonna actually look at it. One of the other things we've done — actually, guys from SUSE did this work — Docker had this command, docker top, to show you the processes running inside of the container. There's a new library called psgo that can do something really cool: it can show you the UID inside of the container as well as the UID outside of the container. And this is a brand new enhancement to Podman that lets you see that inside of the container it says I'm user root, but outside of the container, that same process is running as UID 100,000. Matter of fact, I'll show you that right now. So here we see the sleep program that I launched inside of the container is actually running as 100,000 when I look from outside of the container. So now I'm gonna run another container, and this time, instead of using 100,000 for the first process, I'm gonna use 200,000. So now I have the container, and if I look at it, I see that I'm running as root inside of the container, but as 200,000 outside of the container. And if I look at the system, I will see that one sleep is running as 100,000 and the other one's running as 200,000 — but from inside of the containers, they both think they're root. What's happening on the file system underneath is that we've taken the Alpine file system and we're actually chowning it on the fly. And we have some really interesting tools that we're adding to make that chowning faster and better.
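A hedged sketch of the per-container separation being demonstrated — `--uidmap` takes container-uid:host-uid:length; the image name and ranges here are illustrative, and the real commands are shown as comments since they need Podman and root:

```shell
# Guard so the sketch degrades gracefully where podman isn't installed.
if command -v podman >/dev/null 2>&1; then
  OUT=$(podman --version)
else
  OUT="podman not installed; commands shown for illustration"
fi

# Container A: its root is host uid 100000
#   sudo podman run --uidmap 0:100000:5000 -d alpine sleep 1000
# Container B: its root is host uid 200000
#   sudo podman run --uidmap 0:200000:5000 -d alpine sleep 1000
#
# ps on the host then shows two sleeps owned by 100000 and 200000; inside,
# both think they are root, and neither range can touch the other's files.
printf '%s\n' "$OUT"
```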
Okay, so I talked earlier about Docker being this client-server model, where you exec a program and it talks to a server, and that causes lots of issues. It breaks things like sd_notify, right? Everybody knows what systemd does with sd_notify: you run a process inside of a container, and it calls back to systemd and says, I'm ready to receive requests. Well, that never worked under Docker, because the notification can't get back to the right place: you put the docker command inside of a systemd unit file, but that client is talking to the Docker daemon away on a different socket, and the process saying "I'm ready" is saying it back to the Docker server, not to the Docker client that systemd launched. So what I'm gonna show you is that Podman actually does exactly what you think it does. The way I do that is — does anybody know what this login UID is? When you log onto a Linux system, there's a UID, part of your process state, that records who you are. It recorded that I logged in as UID 3267 — Dan Walsh. I can't change that. I can become root, I can sudo, I can do anything else on the system — this login UID tracks me. There's no way for me to change that login UID once it's set. That means that the auditing subsystem can record the fact that I did something — whether I was root or a different user or anybody else, it was Dan Walsh that did it. So here we're gonna run a container, and the container is showing you that inside of the container, the process is running with login UID 3267 — Dan Walsh. If I run Docker, it shows the login UID as UID minus one — unset — because the Docker daemon was started by the init system and never logged onto the system.
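You can check the login UID behavior for yourself; this reads the kernel's record for the current process (a daemon-started process would show 4294967295, i.e. UID -1, "unset"):

```shell
# The audit login UID is set at login and immutable afterwards; it survives
# su and sudo, which is why audit records can always name the real human.
if [ -r /proc/self/loginuid ]; then
  LOGINUID=$(cat /proc/self/loginuid)
else
  LOGINUID="unavailable (no /proc/self/loginuid on this system)"
fi
printf 'loginuid: %s\n' "$LOGINUID"
```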
So if I do something evil on the system through Docker, the audit trail comes back and says Docker did something evil. If I do it through Podman, it's gonna come back and say Dan Walsh did something evil. So what I'm doing here is putting a watch on /etc/shadow — which hopefully worked... oh, it says the rule already existed, which is good, because I ran this earlier. And now inside of Podman, I am trying to rewrite /etc/shadow, right? And if I look down here, I got trapped: it shows that auid=dwalsh did something to /etc/shadow. Now if I do it through Docker, it shows auid=unset did something to /etc/shadow. So what that demonstrates, from a security point of view, is that Podman executing a command tracks what the user did on the system, as opposed to Docker. And this is why I say that when you give people access to the Docker socket, it's more powerful than sudo: if I go through sudo when I do something on your system, the audit system knows that Dan Walsh did it. So that showed a little bit of the podman top features. I'm gonna run a container. I can see the SELinux label via podman top. I can also see — this is something no one sees when they run Docker, no one has any idea — the Linux capabilities that are on by default when I run Podman. People are always asking about capabilities: what should I drop? What should I add? Well, these are the default lists that you get in almost every container you run in the environment. If you run CRI-O, we run with a lot fewer, right? A lot of these capabilities are there just to be able to build containers, okay? mknod — that's so I can create device nodes. If I'm running containers underneath CRI-O in production, I'm not expecting people to create device nodes, so we take it away by default — you don't get mknod. There are a couple of other ones that we get rid of too when we're running it.
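The capability-tightening stance described above can be sketched like this — `--cap-drop`/`--cap-add` are standard Podman/Docker options, while the image and service here are examples of my own:

```shell
# Show the capability sets of the current process (first lines of capsh).
if command -v capsh >/dev/null 2>&1; then
  CAPS=$(capsh --print | head -n 2)
else
  CAPS="capsh (libcap tools) not installed; output shown for illustration"
fi

# Production stance: drop everything, add back only what the app needs --
# e.g. a web server binding port 80 only needs NET_BIND_SERVICE:
#   podman run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-web-image
printf '%s\n' "$CAPS"
```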
But basically, if you look at running applications, there are different ways of running them. In this case Podman needs more privileges by default, because we don't know what you're gonna do with it. So, we call it Podman for a reason: one of the things we wanna do is be able to manage pods. What I've shown you so far is basically all the CLI that matches up with Docker. But here we have pods. If I wanna create a pod, I'm gonna use podman pod create, I'm gonna name the pod, and it creates a pod on the system. That echo line there is actually wrong — I didn't fix it, but don't worry about it; I'll make this all available and hopefully clean it up. Now I'm gonna create a container, but I'm gonna put the container inside of the pod, okay? So I created a pod, now I'm assigning a container to the pod, now I'm gonna create another container and assign it to the pod too. And guess what — I'm gonna show what's on the system. Okay, these are all the containers I ran earlier — it would've worked a lot better if I'd killed all the containers first. But basically, I am about to start the pod. The two containers I just created were created, but not started. So at that point, now you should see two more containers: they were created on the system, and their status just says "Created" one second ago. So you can see, when I started the pod, I started both of those containers. Now if I wanna stop the pod — this is actually a bug: when I'm stopping a container, it waits 10 seconds for the container to actually exit, in case the tool catches the signal, and sleep isn't catching it. There's a bug in Podman right now where it waits on the first container before it sends the signal to the second one.
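The pod workflow being demonstrated maps to this command sequence — a sketch against the 0.9-era CLI, with illustrative names, shown as comments since it needs Podman installed:

```shell
# Guard so the sketch degrades gracefully where podman isn't installed.
if command -v podman >/dev/null 2>&1; then
  OUT="podman available: $(podman --version)"
else
  OUT="podman not installed; commands shown for illustration"
fi

# podman pod create --name my-pod              # pod + its infra container
# podman create --pod my-pod alpine sleep 600  # first member, Created state
# podman create --pod my-pod alpine sleep 600  # second member, Created state
# podman pod start my-pod                      # starts both containers
# podman pod stop my-pod                       # signals the members
# podman pod rm -f my-pod                      # removes pod and members
printf '%s\n' "$OUT"
```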
So we actually have to fix that so it sends the signals to all of them. So it's gonna wait about 20 seconds. There — we did it, and we should be back to three containers running on the system. And again, it'd be better if I'd killed all the containers before I ran it. So that shows you pods running on the system. If I wanna remove all the pods, I can actually force it to remove all the containers that were created for the pods. And now if I list out the pods, I'm back to zero pods on the system. So that is a real quick demonstration of some of the features you get with Podman. So let's talk about other things. I mentioned some of this stuff earlier — since a lot of people here are interested in systemd: a couple of years ago I came to a systemd conference and gave a talk about the CTO of Docker versus Lennart, and the two of them were being very difficult about working together. So one of the things we did when we built Podman is we wanted to have proper systemd integration. Proper meaning: you can just run a container that has systemd in it and it will just work under Podman, without any modifications. We support the login UID, as I talked about and demonstrated, and we do proper sd_notify: you run a container, that container does sd_notify, that propagates all the way up through Podman, and that reports to systemd that the container is up and running and ready to receive requests. We also do socket activation. So you can put Podman into the system and have systemd automatically fire it up, and it will pass the socket down to the process that's running. Another thing we wanted for Podman — right now it's written in Go — is an interface that people could use from languages other than Go. So we added a remote API for it.
So we wanted to basically say: I can set up a systemd socket-activated Podman instance and then have an API that talks to it. We decided to use varlink. Varlink is an API communication protocol and library that we can use to do socket activation and communicate with a Podman running on the system. And here's a unit file that we provide. So if you install Podman, you get an io.podman socket file and service file. If you enable those, then you can start to run remote commands against Podman. We also provide a Python library that you can use to build Python programs that communicate with containers. This slide shows importing podman and, I think, dumping information; so this is dumping information about Podman on the host. But basically it's a full API that you can talk to from Python. We also built a program called pypodman that does all of the Podman commands, but written in Python. The Python command talks over varlink to the server. Why did we write a Python command to do this? Because we wanted Podman on different operating systems. So here is a demonstration of using pypodman. You're probably not gonna be able to see it, but basically it's running all the commands against the system: pypodman to list the containers, showing some of the config information, showing info, that's the podman info command. And the last step of this shows that it's all running on top of a Mac, okay? So this is a Python script running on a Mac, talking to a virtual machine that's running varlink into a Podman service sitting in a systemd unit file.
So we needed a protocol to talk from the Mac to the server, and we called it SSH, okay? We're basically taking advantage of SSH on a Mac, or on a Windows box or a remote Linux box, to talk varlink to the system. All you have to do to make this work is set up SSH to communicate between the two boxes and it'll work. We're also adding Cockpit support, since we have remote clients. So we're adding Podman support into Cockpit. Sadly, I have it running on here and it didn't show any images, so we're not gonna show you that, but Martin Pitt's been working to help us get this working. Basically we're getting full integration between Cockpit and Podman, and in this case we're using Node.js to talk the varlink protocol to a Podman running in a systemd unit file. Now, I say no big fat daemons, and I'm kind of cheating a little bit, but the thing here is that Podman is only running for the duration of the connection, okay? We're firing up a Podman for every single connection, so we're not doing any kind of long-running monitoring of multiple containers or anything like that. So what don't we do? Okay, I've talked about us replacing Docker, but there's stuff the Docker daemon did that we're not doing inside of Podman. One of the things we don't do is auto-restart, okay? Because there's this thing called systemd that does a real good job of auto-restart. So if you want a container that's gonna be restarted if it fails or blows up, you put Podman inside of your unit file and it just works. We don't do Docker Swarm. We work on Kubernetes: if you wanna orchestrate lots and lots of containers, use CRI-O on top of Kubernetes. We don't do Notary right now, okay? But if someone wants to open up patches to make Podman work with Notary, we'd be very willing to take them.
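The auto-restart point can be sketched as an ordinary unit file, with systemd rather than a container daemon owning the restart policy; the image name is illustrative:

```ini
[Unit]
Description=Container restarted by systemd on failure

[Service]
ExecStart=/usr/bin/podman run --rm --name web registry.example.com/web:latest
ExecStop=/usr/bin/podman stop -t 10 web
# systemd provides what Docker's --restart policy did.
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```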
Red Hat's probably not gonna put the engineering into doing that. We don't do health checks yet, okay? Health checks could probably be done as a sidecar container, or as a systemd unit file that we create. Basically, a health check is supposed to run periodically and make sure that the container is running properly on the system. But we haven't quite figured out how we're gonna do that. We don't do the Docker API. So if you have tools that talk the Docker API to the Docker daemon, we don't have a tool for that. Lastly, we don't do Docker volume plugins yet. We do most of what you think of as regular volumes, file-system volumes, but people have built volume plugins for the Docker daemon. We're looking at doing that very soon; that's planned on the roadmap. At this point, this is where we take questions. Anybody have questions? Yes. A lot of people have questions; you're gonna be running around. So, first thing, in terms of implementing the Docker API, there is one reason for doing that, maybe not in Podman itself: we could just provide some wrapper on top of the varlink API. Because without that, I don't have integration in all of the IDEs, IntelliJ, whatever. And without that, I am unable to get rid of Docker on the developer's workstation. What do you need? So you're talking about Twistlock and stuff like that? No, no, no. I mean the IDEs: IntelliJ or PyCharm or Visual Studio Code, every IDE has Docker integration for the developer. It's all talking to a root process, launching a root process. Yeah, we'd be willing to accept it if people wanted to build something like that; it probably would not be hugely complicated. I understand it's not in the scope of Podman, but it's just an idea for a wrapper. That's one thing.
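One hedged sketch of the "health check as a systemd unit" idea floated above: a oneshot probe fired by a timer that restarts a hypothetical myapp.service when the probe fails. The endpoint, unit names, and interval are all assumptions:

```ini
# myapp-health.timer -- fire the probe periodically
[Unit]
Description=Periodic health probe for the myapp container

[Timer]
OnBootSec=30s
OnUnitActiveSec=30s

[Install]
WantedBy=timers.target

# myapp-health.service -- the probe itself
[Unit]
Description=Probe myapp; restart its unit if the probe fails

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'curl --fail --silent http://localhost:8080/healthz || systemctl restart myapp.service'
```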
And the second thing: we got rid of Docker in almost all places except one, the Dockerfile. Right, so we support the Dockerfile format. I'll sign up for a lightning talk later and do five minutes on Buildah. But basically, right now Podman only supports the Dockerfile format; Buildah supports anything you want. Yeah, for Buildah, I know perfectly well that I can just write a simple shell script and create whatever I want. But not everyone should write shell scripts. Right, but the goal of Buildah is to let you build another tool, like Ansible Container or source-to-image, so people can build tools that don't have to generate a Dockerfile in order to build an image. But it would be nice to have one standard. Let's let other people ask questions; I'll talk to you forever outside, okay? Just yell. Does Podman support union file systems like overlayFS? Yes, right now it supports overlayFS by default. We have a bug in the devicemapper backend: devicemapper was built under the assumption that only one process would use it. But we have ideas of how we're gonna fix it. It runs on top of Btrfs, though I don't know if anybody's tested it. Also VFS, the VFS layer. And someone's built an overlay file system for FUSE. When you're running in non-root environments, it's actually using VFS, because you can't use kernel overlayFS there. One of my engineers wrote a thing called fuse-overlayfs that actually works real well for non-root, so we're introducing that too. Anybody else? Yeah, you gotta wait. One, two? Yeah. One of the reasons I've still been using Docker is mostly for Docker Compose. Yeah, Docker Compose, I didn't put that one up. Seems like it could be done. We've been asked about that, and again, I'd be willing to accept it. Some kind of Compose is on the roadmap.
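For rootless use, the fuse-overlayfs mentioned above is typically wired in through the user's storage configuration. A sketch follows; the file path and the exact option table vary across containers-storage versions and distributions:

```ini
# ~/.config/containers/storage.conf
[storage]
driver = "overlay"

[storage.options]
# Point the overlay driver at the FUSE implementation so unprivileged
# users get overlay semantics instead of falling back to vfs.
mount_program = "/usr/bin/fuse-overlayfs"
```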
Whether we support the Docker Compose language, or a Kubernetes Compose language, or an OpenShift Compose language, again, Podman's a fully open-source project. So anybody who wants to contribute is welcome to contribute, and we will take patches for that. But right now we don't have Compose yet. Anybody else? Oh, up here. That's actually a plus-one for the API, because Ansible runs via the API, the service API. So without that... So, Ansible? Yeah. Well, you mean Ansible Container or something? Ansible Container. Yeah, Ansible Container, they're looking into moving directly to Buildah for that, to basically get away from the Docker daemon. I mean, right now, in my opinion, everybody's basically putting the Docker socket all over the place. The Docker socket is one of the most dangerous things you can do: as soon as you give out the Docker socket, you give full root on your system without any tracking. And most of the time you're doing that to do Docker builds, right? So usually you're basically giving your people the ability to create a Dockerfile and then build it into an image. Well, I just showed you that, matter of fact, you can run Podman non-root, with Buildah inside a container, and actually build the image and push it to docker.io. And I just said "Docker image", so I owe money, okay? That's "container image"; I blew it. That's, I think, the first time I really screwed up, okay? Anybody else? All right, that's probably it; I'm out of time anyways. Thank you.
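The rootless build-and-push flow mentioned at the end might look like the following with Buildah; the image tag, registry, and account names are illustrative, and credentials are assumed to be configured already:

```shell
# Build from a Dockerfile in the current directory, as a non-root user.
buildah bud -t myapp:latest .

# Push the result to a registry.
buildah push myapp:latest docker://docker.io/myuser/myapp:latest
```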