Hi, guys. So today we're going to be talking about replacing Docker with Podman. Here's the first step: you dnf install podman. Next step: alias docker=podman. Any questions? We'll wait till the end for questions. OK, so this is my favorite tweet ever. Up here — I don't know if this thing will work — Alan Moran said: "I completely forgot that two months ago I set up an alias docker=podman, and it has been a dream. No big fat daemons." Project Atomic. So basically he set it up two months ago, and down here Joe Thompson asked, "So what reminded you?" And he said, "I executed docker help, and all of a sudden podman help came up." So that's sort of a proof point. And believe it or not, that tweet went out back in May. At this point, if you've ever come to one of my talks, you know I make you do stuff. So everybody up — please read out loud the text on the slide. Nice job. These might all be US things, but basically containers are a Linux thing. Docker was awesome: they created this mechanism for storing software inside of basically a tarball plus a JSON file. So when I talk about an image, or a container image, we're really talking about this: you create a directory on disk called the rootfs, and you put some content in it. A rootfs is called that because it looks like the root of a Linux operating system — if you go into that directory, you'll see things like /usr and /var and /root and /home. So it's a rootfs. You put your content into it, and then you create a JSON file associated with the image. The JSON file has fields that describe what's in your image and how to run it: things like the environment variables required to run the image, the entry point, the working directory — things like that are in the JSON file.
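What's being described here — an image as a rootfs directory plus a JSON config — can be sketched like this (illustrative Python with made-up paths and values, not anything from the talk or a real image tool):

```python
import json, os, tempfile

# Sketch: an image is a root filesystem plus a JSON config describing
# how to run it. Every path and value below is illustrative.
root = tempfile.mkdtemp()
rootfs = os.path.join(root, "rootfs")
for d in ("usr", "var", "root", "home"):        # looks like / on Linux
    os.makedirs(os.path.join(rootfs, d))

config = {
    "Env": ["PATH=/usr/bin"],                   # environment the image expects
    "Entrypoint": ["/usr/bin/myapp"],           # what runs by default
    "WorkingDir": "/root",
}
with open(os.path.join(root, "config.json"), "w") as f:
    json.dump(config, f)

print(sorted(os.listdir(rootfs)))               # the rootfs skeleton
```

Tar up the rootfs, pair it with the config JSON, and you have the shape of a container image.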
And then you tie that all up together and put it in a container registry. That's what Docker really invented, and Docker sort of controlled the image format. But along came CoreOS, and they had this tool called Rocket (rkt). What they wanted to do was standardize the image format — basically define what goes in that JSON file, have a standard for its fields. Why do you need standards? Anybody who's been around the computer industry for the last twenty-something years — or in my case, 150 years — remembers Microsoft. The guys we now like. But back in the 1990s, Microsoft controlled .doc, the de facto document format everybody was sending around the internet. And every time Microsoft released a new version of Office, they changed the format. All of a sudden you'd get documents in email or on websites, and you'd have to buy new software — a new Office — to be able to view them. And the open source alternatives — LibreOffice or OpenOffice, whatever it was at the time — would instantly be broken. Because there was no standard, no standards body controlling what happened to the document format. So CoreOS came along and said they wanted to standardize the image format, and they opened up a request for everybody to review this thing called the appc spec, the application container specification. At which point all the people working on containers got together and said: we can't have two standards for this. We can't have RPM versus Debian again. So they came together and agreed that we need a standards body to define what a container image is going to be.
And that standards body became OCI, the Open Container Initiative, and it's a standard definition of what makes a container. When you want to run a container on your system, the first thing you have to do is identify what the hell a container even is. So now we had a standards body, and about a year ago the OCI image specification came out, which defines what goes in the JSON file and what goes in the tarball. So now we have a standard for what these things are: when I say I want to run Fedora, or I want to run nginx, there's a standard that defines that. Now we take those container images and store them in a container registry. A container registry is really just a web server, but there's a protocol for communicating with it. So when I want to run a container, I need a mechanism to go to a container registry and pull the image to my local system. A few years ago, Antonio Murdaca — he's somewhere in here — opened a pull request with Docker. We wanted to pull down just the JSON file associated with an image, to the local system, to be able to look at it — because some of these tarballs can get enormous, well over a gigabyte. What we wanted was basically a docker inspect --remote. We went to Docker with a pull request, and they said they didn't want to pollute the CLI with these new options. But they said: it's just a web server — just go out there, grab the JSON file, and pull it back yourself. So Antonio built a tool called Skopeo. Originally, Skopeo was just for pulling down that JSON file — the OCI image configuration — to the host so you could examine it. But Antonio didn't stop there. He kept going and implemented a whole bunch of the protocol, like pulling down the entire tarball, and pushing the tarball as well.
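The point behind that "just grab the JSON" idea can be sketched as a toy model (an in-memory fake "registry" with made-up contents; real Skopeo speaks the registry HTTP protocol to a real server):

```python
# Toy model: the config JSON is tiny and can be fetched on its own,
# while the huge layer tarballs stay on the server.
REGISTRY = {
    "docker.io/library/nginx": {
        "config": {"Entrypoint": ["/usr/sbin/nginx"], "Env": ["PATH=/usr/sbin:/usr/bin"]},
        "layers": ["<a tarball well over a gigabyte>"],   # never fetched below
    }
}

def inspect(name):
    """Like 'skopeo inspect': return only the image's config JSON."""
    return REGISTRY[name]["config"]

print(inspect("docker.io/library/nginx")["Entrypoint"])
```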
And eventually this tool grew to be able to move container images between all sorts of different kinds of container storage. You can move an image from one container registry to another without being root. You can move it to your local system, or put it in a directory. It can even convert between different image formats — and we're going to talk about a thing called container storage in a minute. So now we have a mechanism. Oh — CoreOS came to us at the time, and we were trying to convince them to use Skopeo to move their container images around. They said: we don't really want to exec a tool, we want a library. So we broke Skopeo into two parts and created a new library, containers/image on GitHub. It's all the source code for moving container images between container registries and local storage. So now we have a mechanism to define an image and to pull an image. The next thing we need is a mechanism for storing the image on disk. Container images tend to be layered: you have a base image, then I put Apache on top of it, then JBoss on top of that — multiple layers. To store, build, and create those, we needed a special kind of file system: a copy-on-write file system. For that we created a separate library called containers/storage. Usually people think of things like OverlayFS, device mapper, Btrfs — there are all these file systems. This was code originally contributed to Docker; we pulled it out into a separate library so it could be developed at its own pace. So now we have the ability to define an image, pull an image, and store it on local disk. The last thing we need is to actually be able to run the container.
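The layering idea just described can be sketched as a stack of directories where the top layer wins — roughly the semantics an overlay-style copy-on-write file system gives you (a toy model, not the real containers/storage API):

```python
# Toy layered image: each layer maps path -> contents; later layers override.
base   = {"/usr/bin/sh": "shell v1", "/etc/os-release": "Fedora"}
jboss  = {"/opt/jboss/run": "jboss"}
update = {"/usr/bin/sh": "shell v2"}            # an upper layer replacing a file

def resolve(path, layers):
    """Look a path up from the top layer down, like an overlay mount."""
    for layer in reversed(layers):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

layers = [base, jboss, update]
print(resolve("/usr/bin/sh", layers))           # the top layer wins
```

Writes go to the top layer only, which is why many containers can share the same read-only base layers on disk.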
And what we needed next was a mechanism to define what it means to run a container on your system — and we wanted a specification for that, because we wanted to be able to run containers in different ways. What you do is take the influence of three inputs and combine them into one specification. When I run a container, the container has some content — the container image. Remember, it has that JSON that says: these are the environment variables I expect to be set, this is the working directory I want. So we get part of the specification from there. Then we have input from the tool that's actually running the image. Tools like Docker and Podman basically hard-code, or at least put in a config file somewhere, what they expect to happen when you run a container: things like what SELinux label to use, which capabilities to drop — that's built into the container engine. And finally we take user input. User input is the way you override stuff, right? You say: I want to run this container, but instead of the default entry point, I want to run /bin/sh. So we combine the user input, the engine's configuration, and the content inside the image, and write out yet another JSON file. That JSON file is defined by the OCI runtime specification. And because we have a specification, other tools can read it, interpret it, and do containers in different ways. You might have heard of Kata Containers; there's runc, there's gVisor. All of these follow the same standard, so every container engine can generate the specification and then launch whatever OCI runtime implements the ability to read it.
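That three-way merge can be sketched like this (field names and values are simplified stand-ins; the real OCI runtime spec has many more fields):

```python
# Three inputs to the runtime spec: image config, engine defaults, user input.
image_config    = {"entrypoint": ["/usr/sbin/nginx"],
                   "env": ["PATH=/usr/bin"], "workdir": "/"}
engine_defaults = {"selinux_label": "container_t",
                   "drop_caps": ["CAP_SYS_ADMIN"]}
user_input      = {"entrypoint": ["/bin/sh"]}    # override: run a shell instead

def make_runtime_spec(image, engine, user):
    """Combine the three inputs; later sources win, so user input overrides
    the image, and engine policy rides along."""
    spec = {}
    for source in (image, engine, user):
        spec.update(source)
    return spec

spec = make_runtime_spec(image_config, engine_defaults, user_input)
print(spec["entrypoint"])
```

The resulting dict is the shape of the JSON handed to the runtime: the user's /bin/sh beat the image's entry point, while the engine's security settings came along untouched.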
runc was the first, and is the default, implementation of the OCI runtime specification — a tool that interprets the runtime spec and executes it. Almost everybody in the world running containers right now is running runc. Docker uses runc; all the tools I'm talking about — CRI-O, containerd, Buildah — use runc as the default way to run containers. So now we have the full definition of what it means to run a container: the definition of what a container is, the ability to pull it from a container registry, the ability to store it on disk, and the ability to launch it. Anything missing? Oh yeah, we need a couple more things. We need a standard way to set up container networks. Once I start a container, I want to be able to hook a network up to it, and we needed a standard for that. CoreOS introduced a thing called CNI, the Container Network Interface. It's a plugin infrastructure for container networking — it's what Kubernetes uses — and all the tools I'm going to talk about use CNI for defining the network, which allows third parties with whatever fancy networks to plug in. Lastly, there's a tool we use called conmon, the container monitor. When I run a container, I just start up the first process in the container, and then the engine can go away — it doesn't have to sit there watching and monitoring the container. But we wanted a little tiny program that could actually watch what's going on in the container. So we created one, and it's called conmon, the container monitor. For every container we launch there's one of these little programs, and this allows us to take our services down, bring them back up, and reconnect to all the containers running on the system. There is no big fat daemon in this. I have a big thing about this: everybody in the world, when Docker came out, figured that there was only one way in the universe to run a container.
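On the conmon point, the property that matters is that a container's lifetime is not tied to a long-lived engine process. That can be demonstrated with an ordinary detached child process (a stand-in for the idea only — this is not the real conmon code):

```python
import subprocess, time

def launch(cmd):
    """Start a process in its own session -- a stand-in for the conmon idea:
    the process is not tied to a long-lived engine, so the engine (this
    script) can go away and come back while it keeps running."""
    return subprocess.Popen(cmd, start_new_session=True)

child = launch(["sleep", "2"])
time.sleep(0.2)                      # pretend the engine restarted here
print("still running:", child.poll() is None)
child.terminate()
child.wait()
```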
And with the big fat container daemon, you end up with: I want to run my containers in production, I want to build containers, I want to just play with containers — well, all three of those have different security goals. Building containers requires a lot more privilege than just running containers does, and fooling around developing containers requires more privilege again. Because we only had one way to run containers, we ended up with the least common denominator of security. So I wanted to take that apart. We have those four pieces; let's run containers differently — let's look at tools that can run containers differently. One of the tools we built is called Podman, the pod manager. Podman is a tool for managing pods and OCI containers, based on the Docker CLI. If you want to see which containers are on the system: podman ps -a. If I want to run a container: podman run -t fedora sleep. If I want to exec into a container, or list the container images — everything looks familiar. When I'm running Podman on the system — and I'm about to do a demo — this is what it creates; this is the design of what's going on. There's an individual conmon for each one of the containers. And Podman creates pods, not just containers. A pod is a Kubernetes concept that allows you to have one or more containers sharing the same network namespace and the same cgroups — it basically lets you bind them together. When I create a pod, the first thing it creates is an infra container. The infra container is what we call the pod's container. All it does is come up and go to sleep; its only real job is to maintain the namespaces. It holds open the namespaces — the network namespace, the IPC namespace, the PID namespace — and the cgroups, and it just sits there running.
Because if I kill that process, all those namespaces — all that identity — goes away. Then I run container A, which is sort of the primary container. If I just say run the fedora container, Podman will create an infra container, and the single container runs alongside it — that's the primary container of the pod. I can also run additional containers in there. Usually we call these sidecar containers — containers that can monitor the primary container inside the pod. For the most part, most people just run a single container, but when they run it with Podman, they're actually creating something that looks like this. Anybody want a demo? Let's live on the edge. OK. Everybody can see this, right? Everybody see me typing my password? OK, so — I think it was last week we announced... look at that, it doesn't even work. As of last week we released Podman 1.0 — January 14th, I guess. So this is just a simple podman version: it lists that it's 1.0, what version of Go we built it with, and some other interesting information. It looks very much like what you get from Docker. Now I'll show you podman info — it scrolled a little off the screen, but the interesting part is down at the bottom. We talked about container storage at the beginning of this talk, and here it's pointing at where container storage lives, and you can see we're using overlay. In Podman we call it overlay; it's overlay2 in Docker, but nobody uses the old overlay — everybody uses overlay2 — so we just call it overlay. And it stores content here; you can see different information about where everything is. But right above that is the interesting bit: registries. When we built Podman, we didn't believe there was only one place in the universe to get container images. We didn't hard-code that everybody has to go to docker.io.
What we wanted to do is allow you, similar to what you do with YUM and its repositories, to list your registries. So when you go to pull an image with Podman, it can pull from docker.io, from registry.fedoraproject.org, from Quay, from Red Hat, from CentOS. It'll search through each of these in order when it executes, and figure out where your container image is. Now, an interesting thing: if we break containers apart into these lower-level concepts, and don't insist on just one way of running them, we can actually do containers within containers. What I'm going to show you now is another tool we built, called Buildah. I built a Buildah container image that I'm going to run inside of Podman to build a container — so I'm building a container image inside a container. Now everybody get off the internet so I can use it. Basically it goes out and builds a container image — as you can see, Buildah is running inside the container here. I created my own storage to put inside the container, and I'm using --isolation chroot because I'm already inside a container, so I don't need all the additional container isolation. But basically I just built a container image inside a container there. Let me clean that up. So, an interesting thing: Giuseppe Scrivano — is he here? — and another engineer, these two guys up here in the back, did some really interesting work looking at containers, because they wanted to be able to run containers as non-root. Now people say, oh, I can run Docker as non-root. The way you run Docker as non-root is you open up the permissions on the Docker socket and allow people to communicate with it. Well, if I can communicate with the Docker socket, I have full root on the host. I have the ability to do anything I want on the host, and it's worse than giving me sudo with no password, because I can destroy the logs of everything I did on the system.
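That registry search list shown by podman info comes from a configuration file — on Fedora-family systems of that era, /etc/containers/registries.conf. In the v1 format of the time it looked roughly like this (the exact list is whatever your distribution ships; this one matches the registries named in the demo):

```toml
[registries.search]
registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io',
              'registry.access.redhat.com', 'registry.centos.org']
```

When you ask for an unqualified image name like `fedora`, Podman tries each of these in order until one of them has the image.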
But basically what we wanted to do is be able to run containers without root, taking advantage of a thing called the user namespace. So here I'm no longer using sudo. Oh, please don't tell me it's going to blow up. OK, you're going to have to believe me that it works. Hold on. This is why you don't run a demo on the machine you debug on — it's not a release problem, it's left over from a bug report where someone was running containers in a separate directory. Now is that a recovery or what? OK, so that just pulled down the Alpine image as non-root, and now I'm going to run a container on Alpine. It ran in my home directory — I just listed the top level of the rootfs of the Alpine container. I can show you ps for the containers running on my system; I can show you the images on the system. I happen to have only Alpine — it's the only thing in my home directory, because you just saw me blow away my entire storage. But if I run with sudo again, you can see the images on the host. So there's a separation: the containers I build in my home directory versus the containers on the system — you can see there's a whole bunch of containers there. That shows I'm not doing some kind of hack that sneaks you into root while this is going on. So, interesting — how does this work? The demo is about to show you a tool called buildah unshare. When I'm running containers in my home directory, I'm becoming root temporarily — root within a user namespace — and the Linux operating system is set up to let you do this. It's taking advantage of the user namespace, so I'm going to do a quick demo of user namespaces. If you go onto your latest Linux systems — anything beyond RHEL 7; RHEL 7 will get this feature in 7.7, which comes out in the summer — there's a new file on your system called /etc/subuid.
And what happens in /etc/subuid is there's a mapping that takes your username — dwalsh; probably you guys aren't called dwalsh — and allocates UIDs to it. You get to control these additional UIDs. What we're mapping here is: starting at 100,000, your user gets 65,536 UIDs. That range gets assigned, and the next user gets the next block of 65,536 — you can see there's a test user here, and he starts right after. Every time you useradd a user onto the system, this file gets populated. Now, to enter the user namespace, here I just type buildah unshare. And guess what? I'm root. Suddenly I'm root on the system — and inside this namespace, I'll prove that I am root. So, running as root, let me take a look at my home directory. Right now someone in the back of the room is tweeting that Dan Walsh has root-owned files all over his home directory, that he runs as root on his system. It's interesting that I have all these root files on my machine, right? Pretty stupid thing to do. But if I go to a different window, outside the namespace, all those files are owned by dwalsh. Let's look at this a little deeper. Inside the namespace I can do mkdir test, cd test, touch walsh — and then chown the file to bin:bin. So in my home directory I've got files owned by bin:bin. Pretty cool, huh? Now let's look at what happened in my real home directory, outside the user namespace: it created a file owned by UID 100,000 — that's bin. If I go back into the unshare session, I'll show you what happened there. If I cat /proc/self/uid_map, you can see what it did when it set up the user namespace: it mapped UID 0 inside the container to my UID, 3267, for a range of one. Then, starting at UID 1 inside the container, it mapped host UID 100,000 for the next 65,536 UIDs.
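The lookup that /proc/self/uid_map describes can be sketched as a small function (the UIDs are copied from this demo: 0 maps to 3267 for a length-1 range, then 1 maps to 100,000 for 65,536 UIDs):

```python
# Each uid_map entry: (container_uid_start, host_uid_start, length).
UID_MAP = [(0, 3267, 1), (1, 100000, 65536)]

def to_host_uid(container_uid, uid_map=UID_MAP):
    """Translate a UID inside the user namespace to the UID the host sees."""
    for c_start, h_start, length in uid_map:
        if c_start <= container_uid < c_start + length:
            return h_start + (container_uid - c_start)
    raise ValueError(f"UID {container_uid} is unmapped in this namespace")

print(to_host_uid(0))    # root in the namespace -> the unprivileged user, 3267
print(to_host_uid(1))    # "bin" in the namespace -> first subordinate UID, 100000
```

Anything outside the mapped ranges simply doesn't exist inside the namespace, which is where the isolation comes from.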
So when I created a bin:bin file inside the user namespace, it actually created a file owned by UID 100,000. If I exit the user namespace and try to remove the walsh file, it gives me permission denied — because now I'm not in the user namespace, and I'm trying to remove a file that isn't owned by my UID. Back inside the user namespace, I'm able to remove the file — so I can only remove it from inside. So user namespaces for rootless containers are pretty cool. But I can also do some interesting stuff across namespaces. OK, so I just created a container running sleep — this time as root, outside the user namespace — and in this case I'm mapping UID 100,000 to be root inside my container, with a range of 5,000 UIDs. And there's a really cool option we've introduced to Podman that shows you the UIDs: here I'm saying show me the UID inside the container as well as the host UID. It says the container is running as root inside, and as 100,000 outside. If I look with the ps command at the sleep process, it's running as UID 100,000. Now I'll kick off another container, but this time starting at UID 200,000, again mapping 5,000 UIDs — and you can see that container is running as 200,000. So if the first container breaks out, it's running as UID 100,000, and it can't attack anything owned by UID 200,000. The containers are separated from each other by user namespace. User namespaces have been around for a long time, but now we can actually use them to separate containers. This is something Docker can't do — nobody else in the world can do this at this point, but we can do it with Podman. So, another interesting thing about Podman is that Podman is a fork-exec model of containers.
So if we look at it, there's a value that gets set when you log into the system — it's attached to your processes. Basically: cat /proc/self/loginuid. This UID gets set exactly when you log into the system, and it follows you everywhere. Oh boy, I'm already running out of time. OK, so basically what happens here is we can record that dwalsh did something. Here I'm executing sudo, which executes Podman — so that's sudo creating a root process. Then I'm running Podman, which creates a container, and I cat the same file inside the container. That shows UID 3267 — it shows that Dan Walsh is running inside that container. If I do the exact same thing with Docker, it shows 4294967295, some huge number. Well, that huge number represents -1. Docker is a client-server model of containers: what happens is it talks to a daemon, so there's no record that you did anything on the system, because of that client-server hop. Why is that important? I was going to run a container where I actually break out of the container and modify /etc/shadow on the system. Anyway — that's why you don't oversleep. If the audit subsystem were watching — and I can show you this later, or you can try it yourself — it would have shown that Dan Walsh modified the /etc/shadow file when he broke out of the container, whereas with Docker it would have shown that UID -1 modified /etc/shadow. Because we're using a fork-exec model, we have better security, and there's a talk later on — at 3 o'clock this afternoon, I think — that goes deep into these security features. So let's look at a couple more containers. One of the really cool things that was added is the ability to look at the PIDs associated with containers. We can also look at the labels associated with containers.
We can look at the capabilities, whether or not seccomp is enabled, whether or not the audit subsystem is on. These are all the questions: are my containers secure? What are they running with? With Podman, we now have tools that let you look at these different security mechanisms — and again, that's going to be covered this afternoon. So let's look at pods. We've been talking entirely about containers, and what I want to do at this point is create an actual pod. Pods are groups of one or more containers, so I'm going to create a pod called podtest and then create two containers inside of it. At this point I've created them but not launched them — if I look at my system right now, there are no containers running. Now I start the pod: I assigned two sleep containers to it, and you see there are two containers running on the system. And when I stop the pod, it goes out and stops the containers — stopping the pod stops both containers. So real quick, since I'm running out of time, I'm going to show you one last tool. What I'm doing here is running a series of pods — three different pods. The first is a random number generator; the second is a database that stores the random numbers; and the third is a web front end. I'm basically combining all three of these together, and it ends up launching an application: the web front end graphs the random number generator's output to the desktop. So that's three pods running under libpod. But we didn't stop there. You can see there are these containers running inside the pods.
And now there's a new command, podman generate — I've got five minutes. podman generate kube allows me to generate Kubernetes content from running containers. What we were thinking is: people understand how Docker works, and we wanted to create a mechanism to go from the traditional Docker world into the Kubernetes world. So Podman now has a mechanism for looking at the containers running on the system and generating Kubernetes YAML that you can take and launch directly in Kubernetes. This is the help for it. So I'm going to generate Kube YAML from a standard Podman container. You can use the traditional Docker-style workflow you're used to for running containers, and then generate the YAML that can be used to run them inside Kubernetes. I'm going to generate three of these. This was not working late last night, so we'll see if a miracle happens. Now I'm going to go to Kubernetes and start launching them... and it failed. But if you had a proper Kubernetes setup, you could launch the three exact same containers inside Kubernetes, and then start playing with Kubernetes features — like putting the database and the web front end on different services. It shows the same stuff you were running under Podman, now running inside Kubernetes. And that should work — but at this point I'm not going to fight it anymore. So we'll open up to questions. Any questions? Podman allows you to run containers, create pods, pretty much do anything you want. The most popular feature of Podman right now is running it as non-root — everybody gets excited about being able to do containers as non-root. But basically, we believe Podman is a great tool for replacing your standard Docker CLI.
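To make the generate step concrete: the output of podman generate kube is ordinary Kubernetes pod YAML. Trimmed down and with illustrative names and values (this is not a capture from the demo), it looks roughly like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podtest            # illustrative; Podman uses the pod's own name
spec:
  containers:
  - name: container1
    image: docker.io/library/fedora:latest
    command: ["sleep", "600"]
```

That file can be handed straight to a Kubernetes cluster, which is the whole point of the workflow.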
There's the rest of this presentation, but I'm running out of time. So — the question he's asking is: some people connect directly to the Docker socket and use the API. Well, Podman doesn't implement that API — we don't implement the Docker Engine API. We implement this thing called Varlink. Varlink is a mechanism for people to talk remotely to Podman and drive it. So we have a library for Python, and other programs — people have Cockpit talking to Podman via this Varlink API. We actually have a tool we're working on right now called podman-remote, which allows you to sit on a Mac and talk to a Podman running inside a VM. So Podman has a full remote API. We don't intend to implement the Docker API — and we obviously don't intend to implement Docker Swarm, because we're Kubernetes — but pretty much everything else. I'd say 95% of everything you would traditionally do with the Docker CLI, we have, and then we're going to extend it — we really want to get it into the Kubernetes world. Mac? Well, I just mentioned that. There's a thing called Boot2Docker, and we're working on a similar thing called — well, we're calling it boot2podman, or Boot to CoreOS. Our goal is to have a client that sits on a Mac talking to a VM that's running Podman inside it. That's exactly what Docker does now, because you're running Linux containers — you can't run Linux containers natively on a Mac. So that tool will work on both Windows and Mac. You're asking about Docker Compose? Our goal is not to go there — Docker Compose is sort of a half-baked solution. Our goal is to move toward the Kubernetes world: what we want is to import Kubernetes YAML into Podman and be able to launch multiple pods that match the YAML.
So we're looking at how to integrate better — to allow people to move from the Docker CLI to Kubernetes, and back from Kubernetes to what they're more familiar with, the Docker workflow. Right now we don't intend to do Compose. This is a fully open source project, though, so if you want to do Compose-type stuff with Podman, we're fully open, and we'd probably accept pull requests to do that. Yes — what's my future vision? Well, I'd like to get to the point with Podman where it automatically runs everything in a separate user namespace, for more security — because I'm a security guy. We have visions of better integration of the tooling on the system, getting the Mac stuff working properly so you can have a developer workflow from there. There's a whole page of stuff we plan on doing. Better integration with CRI-O, so if you're running containers in Kubernetes, Podman interacts as well as possible with them — and Podman, Buildah, CRI-O, and Skopeo all share that same container storage, so they can all work together with their individual pieces. Yes — the list? I will post it. And if you follow us, we post blogs regularly: go to podman.io, which lists all the blogs, and github.com/containers/libpod is where Podman lives. Anybody else? OK, sorry about the demos failing, but that's what happens with live demos. Thank you very much.