I'm glad to have the coveted last slot of the day, because this is the one where everybody is totally burnt out, and I'll try to bring you guys back. My name is Dan Walsh, if you haven't seen me speak before. I lead the container team at Red Hat, and I've been at Red Hat for just over 18 years, so I've been around forever. Today we'll be talking about Podman, and about replacing Docker with Podman. I have concert tickets at 7:30 tonight in Lowell, so we're going to try to make this as quick as possible, okay? If you don't know Massachusetts, Lowell is not close, and it's going to take me a while to get out of the city at this hour.

So let's start the presentation with this next slide. The first step is: dnf install podman. You can take notes if you want. Okay, the next thing you do is: alias docker=podman. Okay. Next. Any questions? Okay, I guess I'll move to the next slide.

So back a year ago, on May 29, 2018, a guy named Alan Moran tweeted: "I completely forgot that almost two months ago I said alias docker=podman and it has been a dream" — and then he used my slogan, "no big fat daemons." Down here Joe Thompson responds to him, "What reminded you that you had done that?" And his response: "I executed docker help, and of course podman help came up on the screen." Next slide.

So at this point I usually like to get everybody involved, so everybody please stand up. Next. Please read out loud any text that is on the slide. So I like to think of myself as the "Don Quick-zote" here — someone actually saw this the first time I put up this slide and said, "no, it's Don Quixote" — but that's the joke, in case I have to explain these things to you. How many people were at the last presentation? Okay — no, not my last presentation.
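Those two steps can be sketched as shell. The package name and the alias are exactly as described in the talk; the `shopt` line is my addition, needed only because aliases don't expand in non-interactive bash:

```shell
# Step 1 from the talk (shown as a comment so this snippet runs anywhere):
#   sudo dnf install podman
# Step 2: point the docker command at podman.
shopt -s expand_aliases   # aliases only expand in interactive shells by default
alias docker=podman
alias docker              # show the alias we just created
```

After this, typing `docker --help` really does run `podman --help`, which is the punchline of the tweet above.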
At the last presentation — I came in for the last few minutes — the guy up here talking kept saying "Docker images," "Docker containers," and every time he said that it sent shivers down my spine. I have to control myself and say: you just mean container images. These are just OCI images. So, next slide.

So let's talk about what it means to run a container. I like to level-set: what does it mean when I say I want to run a container? Well, the first thing it means is you have to identify what a container is. Most people, when they say they want to run a container, are talking about something that exists at a container registry. They'll say "I want to run NGINX," or "I want to run Apache," or "I want to run Fedora or Alpine," and really what they're talking about is a container image. A container image is basically this: you create a directory on disk and you put some content in that directory, and that content looks like root — it's called a rootfs, a root file system. If you look in there you'll usually see things like /usr/bin and /etc, so it looks like / on a Linux operating system.
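A minimal sketch of that idea — the directory names mirror what the talk describes, and the file contents here are illustrative, not a real image:

```shell
# Build a toy "rootfs" directory that looks like / on a Linux system.
mkdir -p rootfs/usr/bin rootfs/etc rootfs/var
echo 'root:x:0:0:root:/root:/bin/sh' > rootfs/etc/passwd  # a fake /etc/passwd
ls rootfs   # lists: etc  usr  var
```

A real image is just a directory like this, plus the metadata JSON described next, tarred up.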
The next thing you do is tar that up into a tarball, and then you create a JSON file, and the JSON file basically describes what's inside that tarball. Usually it's things like the entry point — what executable am I going to run when I run the container — environment variables, maybe the working directory, and perhaps some labels that describe the content, what the licensing is, things like that. There are only six or seven fields. Anybody that's ever played with a Dockerfile has seen these things — the working directory, the entry point, the command — those fields are all the stuff that gets put into the JSON file.

This is the thing that Docker developed. The real breakthrough in Docker was that they designed this container image: basically a way to take those tarballs, put them onto a web service, and allow you to pull them down, install them on your box, and run a container on top of them.

Well, a company called CoreOS came along — this is before Red Hat purchased them — and they wanted to standardize the content of that JSON file. They wanted to say what the fields inside the JSON file mean and where the tarball lives, because they wanted to make sure that no one company controlled the standard while this revolution in container images was going on.

Anybody here ever heard of a company called Microsoft? A few people raise their hands. We used to hate them and now we love them — the relationship's changed — but believe it or not, back in the day, people used to send each other documents in this format called .doc. And what happened is, Microsoft in their wisdom would change the format of .doc on every single release. So with Windows 95 you had one .doc, with Windows 97 you had another .doc, and guess what: if you wrote a document on Windows 97 and sent it to someone on Windows 95, they couldn't look at it — but they could if they spent money at Microsoft to buy the latest software. Every single time Microsoft released a new version of .doc, you had to upgrade your software. It was a brilliant marketing scheme, but that's what you get when one company controls a format. Not only that, but the open-source competitors and other people that wanted to read .doc would suddenly be broken as soon as the new version came out, and they'd be scrambling to figure out what changed in the new .doc. You really never want one company controlling a standard interchange format like .doc.

So CoreOS wanted to standardize this so that one company wouldn't control it, and they came out with a thing called the appc spec — the application container specification — and threw it out there and said, we think we should standardize on this format. And suddenly the container world, where everybody was rejoicing that we had a new way of shipping software on Linux — a tarball and a JSON file — suddenly had a second type of JSON file, a second specification.

At that point all the vendors got together and said: we can't have two ways of shipping container images; we want to standardize on one, so we have to form a standards body. And they did — the standards body is called the OCI, the Open Container Initiative. The companies involved were Red Hat, CoreOS, Docker, IBM, Google, Microsoft, and a few others. Together they specified what that container image sitting at a container registry was going to be, and that's called the OCI image format. So when I want to run a container, I now have a standard definition of what that means.

Quick segue before the next step. A couple of months ago — obviously wearing a winter coat — I went to a takeout restaurant in Boston wearing this jacket, which I got because I contributed to the Open Container Initiative, OCI. The lady at the counter asked me if I was into having open containers of alcohol on the streets, and I had to explain to her that OCI, the Open Container Initiative, was something different than that. So just remember this picture right here: they're not the same thing. Next.

Okay, so now I want to run a container, so I need a way of getting that container image from the registry to my host. If I asked most of you how to do that, you'd say the dreaded D-word: docker pull. A few years ago we opened a pull request against upstream Docker, because we wanted to pull down just the JSON file associated with an image, so we could look at it. Those tarballs can get pretty big — two, three, four gigabytes. If you just wanted to look at an image's metadata, you'd have to pull down gigabytes, look at the JSON file, realize it's not what you wanted, and then get rid of it. We thought it would be really nice to pull down just the JSON file, the specification file. Docker rejected the pull request, saying they didn't want to make the command-line interface any more complex — but, they said, it's just a web service, pull down the JSON file yourself, basically do curl, a fancy version of curl.

So we started working on a tool called skopeo, and skopeo implemented that protocol. Antonio Murdaca, who's one of the guys here, actually invented skopeo, and he continued working on it, so skopeo became a mechanism for moving images around: copying images between different types of storage, from registry to registry, from registry to the host, and everything else. That's the symbol of skopeo there. Skopeo was eventually split into two pieces: we wanted a library just for moving images around, and skopeo became the CLI on top of it. That library is called github.com/containers/image, and it has all the technology needed to interact with remote registries and local storage. It's a library we now use across our container tools. Next.

So the next thing: I've identified the container image, and I have the ability to move it to my local host — now I need to install that software on disk. When I talked about container images, I didn't quite define the whole scope of what it means to be a container: containers are layered. Think of a very traditional American wedding cake. You might have a base layer — say, the Fedora image — then Apache installed on top of that, then JBoss installed on top of Apache, and then my app on top of that. The way that works is: the original tarball gets exploded on disk, then I put a copy-on-write file system on top of that and explode the Apache content into it, then another copy-on-write layer for the JBoss content, and I end up with a four-layer wedding cake. The mechanism in Linux for doing this is copy-on-write file systems, and we built a library for it called github.com/containers/storage. Container storage allows us to take a whole bunch of layers from a container registry — pulled down with containers/image — store them on disk, and assemble that rootfs we finally need to execute the application. containers/storage supports things like the overlay file system, device mapper, and Btrfs — there are different ways of doing this, and they're all built into the library.

Last step: the last thing I need to do to run a container is assemble the container and run a program that configures the Linux kernel to run it. There are really three parties that have input into what it means to run a container. First, the original container image up here, which has the JSON file describing the entry point and the default environment variables. Second, what I call the container engine — Docker would be an example — which has hard-coded standards for what it means to run a container: which SELinux labels get applied, what the seccomp rules are going to be, which namespaces are used. That's all baked into the container engine. Third is the human being, or the container orchestrator. Anybody who's ever run a podman or docker command has done things like -v to mount a volume, or -ti, or dropped a capability — all those flags you put on the command line are input from the human or the orchestrator.

All three of those inputs get combined, and guess what: they create another JSON file. That JSON file describes what the user intends when he runs the container, it gets dropped on disk, and it got standardized in OCI as well — it's called the OCI runtime specification. It has the SELinux labels, the namespaces, the capabilities, the cgroups: all that data gets assembled into an OCI runtime spec. Then the container engine launches a tool that configures the Linux kernel to run the container. The default implementation of that is called runc. Every container engine in the world right now uses runc by default — Docker uses it, all the tools Red Hat ships use it. So at the last step, everybody drops an OCI runtime specification and executes runc, which reads that data. There are other runtimes out there — crun, as well as Kata Containers and gVisor — but they're all OCI-compliant, and this was all standardized. So we have the ability to identify a container image, pull it from the registry, store it, and run it. Next slide.

Okay, I forgot about this one, but that's fine. We also have to set up the networking. Networking for containers was also standardized, again by CoreOS: they developed CNI, the Container Network Interface, which allows different vendors to come in and give us different ways of setting up isolated networks — virtual private networks like Flannel or Weave; a lot of vendors have implemented these.

The very last thing we need is a way of monitoring a container. runc just goes out to the kernel, sets up the container, basically does a fork and exec, and then the container is running on the system. But we really need a process that sits out there and watches what the container is doing: it waits for it to exit so it can get its exit code — did it fail or not — and it holds open its logging, so when the container writes logs, they go somewhere. There's a little C program called conmon — the container monitor — that sits out there and watches the container, holding open its standard in, standard out, and standard error, and it's the tool we can connect back through to get back into the container. Each container has a conmon running with it. Next slide.

So what don't we need when we're running containers? This is my big Don Quixote thing: we don't need a big fat container daemon. We don't need a daemon running as root on the system that everybody shares. This is one of my fundamental problems with the Docker daemon: here we are, six or seven years into a container revolution, and the only way to run containers is one big fat root-running daemon that everybody shares, and it becomes a bottleneck for anybody doing any kind of evolution of containers. The Podman tool we're about to talk about doesn't have any daemon, and we're going to demonstrate that. Next.

Think about Docker as this one big all-encompassing tool that does all the features. But think about what we actually want: we want to run containers, we want to build containers, and we want to move containers around. A lot of the effort I've been working on — our team's been working on — is to take what was done in one big fat container daemon and break it into individual components. Next.

Okay, that shouldn't have been there — we're on the presentation, okay. So, introducing one of the tools we built: podman. Podman is a tool for managing pods and containers. Pods are a Kubernetes concept: a pod is a group of one or more containers all running in the same group of namespaces and the same cgroup environment. Podman stands for "pod manager," but it also just manages containers. When we were designing podman, we knew you guys were going to google "how do I do this with a container," and what you're going to be faced with, as soon as you ask, is a docker command line. How do I mount a volume, how do I turn off SELinux, how do I do this — it's all going to come up as a docker command line. So we decided to take advantage of that command line and implement it ourselves. Next. So: to list containers; next, to run containers; to exec into a container; to look at the images. Basically, we copied the Docker CLI — it uses the same commands — and that's why we say you can alias it.

But this is too boring to believe on slides, so I'm going to demonstrate it live. Is that big enough? Okay, let's start this up — this is where I type my password on the screen. By the way, all of our demos — any demo you're going to see today or tomorrow that has anything to do with podman or Buildah or anything else — are at github.com/containers/demos. The script I'm running, you can take home, run on your own laptop, and it should all work.

Right now I'm going to run podman version — it looks very much like docker version. podman info is the command I find much more interesting: it basically dumps out all the information about what's going on. There are a couple of interesting things here. Look at the bottom part — I wish I had a pointer, but maybe I'll do it this way. We're looking at the store here: it shows where the storage configuration is, and down here it shows we're running the overlay driver on the system and that the storage is stored in /var/lib/containers/storage. But let's go up a little bit. This is an interesting section right here: registries. One of the things we did not get along with Docker about was that Docker wanted to hardcode everybody in the world to use docker.io. We felt, right from the beginning, that we wanted to allow multiple registries, to allow you to store your images wherever you wanted. So right from the start, all of our tools support the concept of multiple registries. In this case you see I have docker.io, the Fedora project registry, quay.io, a Red Hat one, and a CentOS one all set up. We also allow you to block container registries — if you wanted to say "I don't want anybody pulling from Red Hat" (I don't know why you'd need to do that), you could block pulls from any individual registry.

So let's go in. Again, we don't have any daemons running on this thing, but one of the interesting things we can start to do with containers — and I gave a talk yesterday on Buildah, which is another tool in the same group — is actually run containers inside of containers. In this example I'm going to run a podman command, and inside of it I'm going to run a build, a buildah bud. So this is actually a container building a container image inside a locked-down container. This is the simple Dockerfile we're going to use to do it: it pulls down an image — BU has a very nice network to run this on — so I pulled down an Alpine image, then I built an image on top of it and called it myimage. And that all happened within a locked-down container. So you can imagine running containers inside something like Kubernetes — a distributed Kubernetes building hundreds of container images. Imagine your CI/CD system doing this without leaking a Docker socket or any access to the host system: we've totally isolated these containers and they're still able to build containers. At this point we're going to remove the containers from the system.
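The transcript doesn't show the actual Dockerfile from that build-in-a-container demo, but from the description — "pulled down an alpine image, then built an image on top of it" — it's along these lines. The RUN step and the `myimage` tag are my assumptions; only the Alpine base and the idea of building on top of it come from the talk:

```shell
# Reconstruction of the demo's simple Dockerfile (contents are assumed).
cat > Containerfile <<'EOF'
FROM alpine
RUN echo "built inside a locked-down container" > /built.txt
EOF
cat Containerfile
```

Inside the locked-down container, the demo then runs something like `buildah bud -t myimage .` against this file.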
and that's really cool but they're really just looks like what docker can do right yeah that's somewhat interesting building containers inside of containers but the really cool thing with with podman now is i can actually run it rootless so the first few commands i was running with sudo in front of them so this now i'm running these commands without sudo so i'm actually pulling a container image i just pulled down the alpine image into my home directory and i just ran a command so that actually ran a container i just listed the top level of the alpine image but it basically ran a container in my home directory with no privileges at all so this is a standard user account there is no additional privileges going on i can show you the images the containers inside of my home directory here's the images inside of my home directory now i'm going to show you the images on the root system if you notice quickly they're different so basically my home directory storage and you know this container images are being stored directly in my home directory and that's different than what's being stored on the root system so let's take a quick look at how we're doing that what is the magic of running rootless podman or rootless builder so what we're doing is we're taking advantage of user namespace so username space is this really cool feature in linux that almost no one's ever used before and podman is probably the first tool when you start to play with it that you're ever going to use it with so username space allows us to configure these files at cwad and at csubgid and i'm going to show you what they look like so here we're going to show this is at csubgid and actually you actually have an entry in my my version but basically so here's my account so inside of at csubgid i have a dwallsh line and this line has dots at 100 000 and then allocates the next 65 000 UIDs into it so out of the box in rel 8 and in fedora ubuntu right now shadow utils every time you add a user allocates 65 
000 UIDs to the user and as you see i did an actually account afterwards and that started at you know basically 100 thousand 65 and then the next 65 000 UIDs so now i'm going to execute a command builder on shia but there is a podman on shia i just got to fix the uh so i just became root on the system so inside of this let's take a quick look at my account and if you look at my home directory here i have all the directory all the files inside of my home directory are owned by root now i claim to be a security guy does that seem like a smart thing to do in my home directory all right you know am i running as root well i am in this home directory right now but if i exit the container and i go over to here basically same exact directory and look at the directory here you'll see that everything is owned by dwallsh so what the magic that's going on here is i've entered a user namespace and i'll show you what the kernel basically says about this shows you this mapping here so down here i don't know if maybe i'll raise it up a little bit so up here it basically says that my UID happens to be 3267 so inside of the user namespace podman or builder is mapped UID zero root inside of my namespace inside of my container is going to be mapped to 3267 for a range of one UIDs then it looks to etsy sub UID and says starting at UID one map 100,000 and then the next 65,000 UIDs so this container this user namespace now i have control over 65,537 UIDs and that's what's allocated to my system my system so let's look at what i can do with this um so what i can do here is i can actually make their foo and i'm going to touch the foo bar and then i'm going to do a chone of bin colon bin of foo okay so now i've created in my home directory a foo bar directory and i have it owned by bin so now we're going to exit out of the username space and i'm going to do an ls of l foo let's go let's go up here and it created a member i did i choned it to bin bin which bin inside of etsy password is 
actually one one so it actually created a file in my home directory owned by 100,000 and group 100,000 if as a user here i try to remove that that file i'm going to get permission tonight because my user account my user that's not in the username space does not have access to those UIDs so this user even though it's you know i created the file using my account i'm not able to delete it now if i go back into the username space or the container i can actually remove the file so now let's okay so that's that explains a little bit how podman runs is rootless right taking advantage of that but let's look at some other powers of of username space and podman is really the tool the first tool to take full advantage of username space so what i'm going to do right now is i'm going to run a container now i'm back to running it as sudo but in this case i'm going to run it as sudo but i'm giving a UID map so i'm going to say run the sleep i'm just running the sleep program inside of a container and i'm going to run it with a UID map a username space map starting at 100,000 and i'm allocating 5,000 UIDs to it i'm going to run the container and there's a really cool feature of podman one clever crowd yelled out 100,000 when they started but you guys weren't clever and basically podman top has the ability to show me what's going on inside the container as well as what's outside the container so that user h uses says show me the user inside the container and show me the host user so that means i'm running as root inside the container but my UID really on the process is 100,000 if i look for the sleep program on there you'll see it is running as 100,000 down here now i'm going to run a second container but the second container i'm going to run starting at UID 200,000 okay so the first one was running as UID 100,000 second one's running as UID 200,000 there's process running on the system if this container breaks out of the containment and gets onto the host it's going to be treated 
as UID 200,000 so if it attacked the first container was running as 100,000 you'd have standard UID separation in that you know if you ran multi-user systems over the years the most basic of linux security means that those UIDs cannot attack each other right they're they're prevented from attacking each other by UID protection so let's look at another thing so there's a there's a thing in the operating system it's been there many years called logging UID how many people in this room have ever heard of logging UID that have not seen me talk before one person back there has seen of the herd of logging UID well logging UID was actually added by for the United States government okay and basically for the auditing system so it's all part of the standard it's auditing I gave the same presentation in front of Department of Defense people who required this to be put in and I had about as many people have responded to me that they've heard of logging UID as we just had here sadly so what logging UID actually does it's really kind of neat is when you log into the system there's a field in the kernel that says that this process is going to be owned by that user so when I log into the system it says my logging UID for that process is thirty two sixty seven from that point on no matter what I do on the system it will be recorded so here I am running pseudo so I'm becoming rude then I'm in that executing another program podman that's launching a container it's catting prox self UID inside of the container and it comes back and says thirty two sixty seven to this okay now I do the exact same thing with docker and docker comes back the exact same command and docker comes back with this huge number okay that huge number there happens to represent minus one in a 64-bit operating system that means that basically any process when you boot up a system that is not started by a user that logged into the system will have a logging UID of minus one okay basically says that no user ever 
started this it was started by the boot system so in this case why is that important well I'm going to turn on auditing and that audit command right there tells me to watch etsy shadow I'm watching for anybody modifying etsy shadow now I'm going to simulate breaking out of a container but I'm going to do a podman run privileged and then I'm going to use a dash v that mounts the host operating system at slash into the container and then I'm going to touch the file host slash etsy shadow which is the host etsy shadow file now if I look at the auditing system to see who modified etsy shadow it comes out and says dwalsh modified etsy shadow so the something in the audit log is the security of the system is basically saying Dan Walsh is the one that modified the etsy shadow now I'm going to do the exact same command with docker and docker says unset modified etsy shadow so if anybody's ever seen me I always tell you that access to the docker socket is the most dangerous thing you can do in the Linux system okay it's more dangerous than giving out the root password to the system and is more dangerous than sudo without root that is because when I log on to the system I can do things through the docker daemon that it cannot be recorded and not audited and when I'm done dealing with the docker daemon I can actually destroy the container and destroy the fact that I Dan Walsh have a talk to the docker daemon that can get eliminated from the log files by default just by destroying the container that did it so a couple we showed a little bit of podman top features before I'm going to show you a couple more I showed you a host user and host PID I mean root user and host user inside the container now I'm showing you the PID namespace so this says the PID inside the container for that sleep program is one but it's really that process ID on the system if I want to see if etsy linux is affected and what the labels are associated with the container I can show a label I can show 
seccomp shows you whether or not it's running with seccomp, and the last one is this idea of capabilities: the ability to break up the power of root. Even though it's root running inside the container, I've actually taken away a lot of the power of root. These are the default list of capabilities that are left on, and these were standardized by Docker; these are things that are hard-coded into Docker and that nobody ever knows about. So these are the default capabilities that all containers run with. If you were running with CRI-O, we run with a much smaller subset of these capabilities. Some of these have some funny historical significance for why they were done.

This one right here I think is shockingly bad, and really I should turn it off in Podman, but we wanted Podman to have the equivalent security of Docker, so the equivalent capabilities. This one right here, AUDIT_WRITE: that allows a process in the container to write to the auditing subsystem. Do you know why that's there? Because when people first started dealing with containers, they wanted to put the SSH daemon inside the container, because they believed that you would SSH directly into a container. It turned out the SSH daemon needed to be able to write to the audit log: when I log into a system, it records the fact that dan walsh logged in, and that has to get written to the auditing subsystem, so it needed AUDIT_WRITE. So someone at Docker said, well, people are running the SSH daemon, we might as well just turn on AUDIT_WRITE. Now almost no one in the world runs SSH inside a container, but we still have that turned on by default.

Another interesting one down here is called NET_RAW. NET_RAW is actually very dangerous to allow in containers: it lets you create IP packets in any form and send them out on your network device. It's been shown that the ability to do that has broken some of those VPNs I talked about earlier, the things from CNI. Do you know why that's on? So you can ping, so you can create an ICMP packet. Well, there are other ways in Linux to send out ICMP packets, but we went with that very insecure default just so you could be inside of a container and able to execute the ping command. So some of these things are just like, why do we have these?

Lastly, this one drives me crazy, and that's MKNOD, which allows you to create device nodes while you're inside the container. Now we have things like the device cgroup and other ways of controlling what you're able to do, but if we could just eliminate that and make the container engine provide the device nodes, we could actually run the containers more securely. Those are things you ought to think about, and if you run under Kubernetes with CRI-O, we eliminate MKNOD, we eliminate NET_RAW, and I think we eliminate SYS_CHROOT. There are some weird ones in there; it's all about running containers in production, but at least we reveal to you what you're actually running in containers.

So the last thing I'm going to show you is pods. With Podman we talk about pods, and a pod is, again, one or more containers. So I executed pod create, and I created a pod up there, and then I added a couple of containers to it. I show you there are no containers running, and now I'm going to start the pod. When I start the pod, it goes out and starts the two containers, so the containers started up simultaneously when I started the pod. It's a way of associating multiple containers with a single namespace, and I could put a hundred containers in there if I wanted. Then if I stop the pod, it shows you that there are no containers running on the system anymore. So that's the end of the demo; let's get back quickly to the presentation.
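The pod demo just described can be sketched as a short session like the one below. The pod name and images are illustrative, and the `--cap-drop` flags echo the capability discussion above; exact output formatting varies by Podman version:

```shell
$ podman pod create --name demo          # create an empty pod (shared namespaces)
$ podman create --pod demo --cap-drop NET_RAW --cap-drop AUDIT_WRITE \
      docker.io/library/nginx            # add a container to the pod
$ podman create --pod demo --cap-drop NET_RAW --cap-drop AUDIT_WRITE \
      docker.io/library/redis            # and a second one
$ podman ps                              # no containers running yet
$ podman pod start demo                  # starts both containers together
$ podman ps                              # now shows both containers
$ podman pod stop demo                   # stops every container in the pod
```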
Okay, so Podman came out about a year and a half ago now, and RHEL 8 just came out, and we decided in RHEL 8 to drop all support for Docker. So if you want to run OCI containers from the command line inside of RHEL 8, you'll have to use Podman and our other tools. Up here inside the documentation: Docker is not included in RHEL 8; for working with containers, use the podman, buildah, skopeo, and runc tools. So Red Hat has basically moved away from it; if you want to use Docker, you'll have to go to Docker to get it from now on, and there will be no support for that.

So let's take a look at some other advantages of Podman. It has proper integration with systemd. Basically, you can run systemd inside of a container. This was always a sticking point with Docker; they never liked the idea of running systemd inside of a container, but it turns out that's a fairly common use case, so we recognize when you're running systemd and we set things up so it runs by default.

We support sd_notify. This has to do with the fork/exec model: in the Docker world it was always hard to get good integration with systemd, because if you ran the docker command line inside of a unit file, it was actually talking out to a different daemon, running in a different unit file, to actually run the container. sd_notify is a way for the processes inside of the container to notify systemd that they're fully up and running. So for instance, if you're running a database, it might take a minute to actually get loaded before it wants to get any calls from anybody else; there's a way to tell systemd, I am fully up and ready to process data. That doesn't work with Docker, but it will work with Podman. Socket activation: the ability to basically have your service not running, but if a packet comes to a socket, then systemd will kick off your container, and the container gets handed the socket to run. Podman also has a full remote API; it's called varlink, and this allows you to run remote commands against Podman. It also
allows us to build Python bindings, so you can actually write Python code that will talk to Podman and execute Podman commands over varlink; that's what the remote API support is. We're also just now releasing what we call podman-remote. This is basically the exact same Podman command line, but instead of acting locally, it talks remotely to another Podman service. So basically it can go to a remote machine over SSH, launch a Podman service on there, and have it execute the command. From one machine you could do a podman images against another machine, basically read the images from that machine and send them back over the SSH connection. Why is that significant? Obviously the idea here is we want to get to Mac and Windows support. There's now a Mac version of Podman available that can talk remotely to a Linux box. Containers are a Linux concept, so even though you're sitting on a Mac, you talk to a Linux box and have it run the Podman commands, but it gives you the feel that you're running them locally, even though it's running on top of, say, a VM or inside the cloud.

We have full Cockpit support for Podman. Cockpit is the GUI interface, a web interface, for managing Fedora, RHEL, just about every operating system now supports Cockpit. It's a web interface for managing a Linux operating system, and it uses JavaScript that talks to Podman, again over the varlink socket.

So what don't we support? There are certain things that we can't or don't support in Podman. We don't support autostart: there's a concept in Docker where you can basically say, I want this container to autostart when the Docker daemon starts. Well, we ain't got a daemon, so there's no way for us to autostart. Well, we do have a daemon: it's called the init system. So our services that run in containers will start the same way every service on a Linux system starts: you put the podman command in a unit file, and systemd at boot time will start up the service.
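A minimal unit file for that pattern might look like the sketch below. The service name, container name, and image are illustrative, and `podman generate systemd` (discussed next) will produce a more complete version for a real container:

```ini
# /etc/systemd/system/mydb.service  (illustrative sketch)
[Unit]
Description=Database container run by Podman
Wants=network-online.target
After=network-online.target

[Service]
# Podman is daemonless, so systemd supervises the container process directly.
ExecStartPre=-/usr/bin/podman rm -f mydb
ExecStart=/usr/bin/podman run --name mydb --rm docker.io/library/mariadb
ExecStop=/usr/bin/podman stop mydb
Restart=on-failure

[Install]
WantedBy=multi-user.target
```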
We do have restart capability, so if the container goes down for any reason you can set it to restart. And we do have a command, podman generate systemd: if you have a few containers or pods running on your system, you can say, generate me a systemd unit file that will run the container that's currently running on my system. So you can do it.

We don't support Docker Swarm, so there is no Podman swarm. We're in the Kubernetes camp; we believe Kubernetes won the orchestration war, so we want to integrate fully into Kubernetes. We currently don't have any support for Notary. We do support a concept we call simple signing, which is basically GPG keys for signing images. We have nothing against Notary; it's just not something my team is willing to invest in. Podman is a fully open source project, so if someone wants to build Notary support, open up a pull request and we would totally take it, but as of this time no one has asked us; no one has opened a pull request to get Notary in.

Significantly, we don't support the Docker API, and we have no plans to support it. We do have varlink for our remote API, but there is no Podman socket for people who want to talk from a Docker client to Podman. We do have partial volume support, and that's getting stronger all the time. We support all the standard volume mounts, but Docker had this concept of Docker volume plugins — separate daemons that Docker would interact with — and we are building that. Finally, we don't support docker-compose. There is an open source project right now called podman-compose, so they're actually supporting it. The problem with docker-compose from our point of view is that it's just way too big; there are way too many different ways, and
if we had supported it in Podman, we think we'd get a never-ending list of bug reports. But we do have the concept that we want to plug Podman directly into the Kubernetes world, so we have podman generate kube and podman play kube. The goal here is to take a few running containers that you set up with Podman and generate Kubernetes YAML files based on them, and then take those Kubernetes YAMLs and basically inject them directly into OpenShift or into any Kubernetes environment, so your container goes from sort of the traditional way you did it with Docker into full OpenShift. We make it easy to do that, and also to take the YAML back from the Kubernetes world and move it into Podman. There is also a lot of effort going on in the Ansible world to integrate with Podman; there's a tool actually called ansible-podman which is all about integrating with Podman.

So at this point, that's the end of my presentation. Here's a whole bunch of links, the mailing list, everything else; podman.io should be a central source for getting information. Do I have any questions? I've got about 10 minutes before I have to start running to that concert — I'm seeing the Beach Boys, and they might not live until I get back. Any questions? Everybody thinks this is awesome? Okay, yes. Well, we're finishing up a few main challenges. We want to finish up the Windows and Mac support to get full Podman-to-Podman remote support. We want to get CoreOS involved in that, so we could talk directly from Podman on a Mac to a CoreOS instance and have that be seamless; right now you can actually install Podman, but you have to set up the SSH connection and have the remote host running. We want to make that more seamless. Cgroups v2 support: right now rootless Podman doesn't support cgroups, because cgroups v1 can't be used that way, so we want to move to cgroups v2.
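As a sketch of the generate kube / play kube round trip mentioned earlier — the container name, file name, and image are illustrative, and the generated YAML is abbreviated:

```shell
$ podman run -d --name web docker.io/library/nginx
$ podman generate kube web > web.yaml   # emit Kubernetes YAML for the running container
$ cat web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: docker.io/library/nginx
$ podman play kube web.yaml             # recreate the pod locally from the YAML
```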
Any other major things you guys want to shout out? Okay, go ahead. The question is how far do we want to go in supporting things like play kube and generate kube. Well, I don't do anything with Docker, but with Podman, how far do I want to go, how many features? I mean, there aren't many features we would deny. Now again, we're not building Podman into a Kubernetes competitor, but our goal is to be able to take Kubernetes stuff and allow you to play around with it in sort of the traditional container-on-the-host world and move it back and forth. Pretty much we want to take any Kubernetes YAML file and be able to translate it into something that would run locally, and then take our stuff and translate it out. Now, there are other things that Kubernetes does that we're not going to duplicate: we're not going cross-node, we're not dealing with any of that stuff. But would we support it? Yes — I mean, if you open up a pull request, we'd be very welcoming to it. Okay, thank you. Anybody else?

Okay, so I'd like you all to go home and download Podman onto your favorite operating system. On RHEL 7.7 we'll finally have support for rootless Podman; 7.6 has support for Podman running as root — that was more about updating the operating system to have all the features we need in it — and RHEL 8 has it out of the box. Fedora, Ubuntu, Debian, just about every distribution out there right now has Podman on it. Okay, thanks for having me, and I'm running.