So we're here to talk about Podman, otherwise known as Pod Manager, or the other way around. We've got a few things to share, primarily around what's new in Podman 4.0, which makes its debut in Fedora 36. Just starting off with a quick description from our colleagues, and I think this describes Podman best in a single sentence: a daemonless, open source, secure, Linux-native tool designed to make it easy to find, run, build, share, and deploy applications using containers and container images. As a quick overview of what we're focused on: we'll be discussing the new network stack, we'll be discussing Podman machine, what we have today and where we're headed, and then a bunch of Podman 4.0 highlights, as it was a very large release.

For Podman 4.0, one of the big changes is a move away from our previous network stack, which was the Container Network Interface, or CNI, plugins and some packaging around them. Our new network stack is written in Rust, with a keen eye on performance and on reducing the binary size of the stack. It really comes down to two new components, which are both separate GitHub projects. One is called Netavark, and it does all the interface configuration, firewall rules, iptables rules, and port mapping. The second component is Aardvark DNS, a container DNS server that we wrote. It is geared specifically toward being a proxying DNS server, as well as resolving container names amongst each other on a network. Aardvark DNS actually replaces a setup we had with dnsmasq, and it is working quite nicely so far.

Okay, so I'll be talking about the core benefits of the new network stack, and why you should care about this. Number one: improved support for IPv6 out of the box. Podman has supported IPv6 to a limited degree for quite a while now, but with some serious limitations on how you did it. You needed a publicly routable v6 subnet, and you needed to specifically route that v6 subnet to the machine that was hosting Podman; at that point it would probably work, with varying degrees of success. The new stack is designed to work out of the box: we take a public, or rather private, v6 subnet and we NAT it the exact same way that v4 works. This is not as technically correct as the previous approach, but it has massive benefits for working out of the box. Basically, you don't have to worry about configuring anything; it should just start.

On top of that: advanced DNS support. Container DNS is a very important feature of Podman. Basically, if I create three containers, let's call them db, frontend, and web, I can have db ping either frontend or web, by name, that is. We have a DNS server for all the containers, and it allows you to reach any other container by name. However, the previous implementation with the CNI plugins only worked for containers on a single network. We've expanded that now, and it works with containers on as many networks as you want. As part of this, we've generally improved how networking works with multiple networks: you can join as many networks as you want with podman network connect, and you can now set different static IP addresses and static MAC addresses for the networks you're joining. Significant improvements there.
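As a rough illustration of the multi-network DNS support just described (the network and container names are invented for the example, and the static IP must fall inside the second network's subnet):

    podman network create webnet
    podman network create dbnet
    podman run -d --name web --network webnet docker.io/library/alpine sleep 3600
    podman run -d --name db --network dbnet docker.io/library/alpine sleep 3600
    # attach web to a second network, with a static IP on that network
    podman network connect --ip 10.89.1.10 dbnet web
    # db can now resolve and reach web by name via Aardvark DNS
    podman exec db ping -c 1 web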
Speed: we have made the experience of podman run significantly faster with this change. It turns out that a sizable portion of the process of running a container is just setting up the network stack for the container to use. Brent previously mentioned that our previous stack was called the CNI plugins; that's because there were different plugins to do different parts of networking, so to actually set up a container you'd end up running about five of them. Netavark is a single binary that does everything, and because of that, somewhat unintentionally, we made things rather fast.

And finally, we have a solid focus on the single node, a solid focus on what Podman needs. The CNI plugins are not actually a Podman project; they were created to support Kubernetes, and we have had some conflicts in the past where they were doing things that are better for Kubernetes, whereas we want things that are better suited to running containers on a single node. Eventually we decided that it was better to do our own thing and write our own stack that was single purpose, designed exclusively for Podman, to do exactly what we need. That is what Netavark and Aardvark are. We're pretty excited about these two and expect a lot more from them in the future.

Okay, now I'll be going over the general process of how networking in Podman works, specifically with Netavark and Aardvark, though CNI did the same general thing. We start off inside Podman, inside what we call libpod, which is the Pod Manager library we use; it's basically the heart of Podman. Podman has just been told to start a container. This only happens on start: if you create a container and let it sit there, you do not have a network. We only set this stuff up when you actually run one. We're going to create a little configuration to hand down to Netavark. The configuration has basic information, like what type of network mode you want. Mostly we talk about bridge networking, where you have a bridge on the host and the container connects to it, but there are also other modes like macvlan, where the container is directly connected to an interface on the host; basically, that allows it to get an IP address on the network the system is connected to, as opposed to an internal IP address.

Once we have that configuration, with any information that we need (static IPs, static MAC addresses, et cetera), we hand it down to Netavark. Netavark starts off by handling any global system configuration that we need to access the internet. This is mostly sysctls: we need to enable routing, we need to enable v6 routing, a few other things. Then we create the network interfaces required. Most notably, this is the bridge interface I talked about previously, but we also have a veth pair, a pair of virtual ethernet interfaces: one of them is connected to the bridge on the host and one of them is connected to the container, and that's how traffic gets to and from the container. Then we do some iptables configuration to get network address translation, what we call NAT, working, and that's going to allow the container to talk to the internet. There are two types of NAT we need to do. One is a global NAT for the bridge, which basically says any container connected to this bridge, any IP address connected to the bridge really, can be translated out to the internet, so that lets general traffic flow.
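Conceptually, that global NAT behaves like the following iptables masquerade rule; this is only a sketch with an example subnet and bridge name, as Netavark actually manages its own dedicated chains and more involved rules:

    iptables -t nat -A POSTROUTING -s 10.89.0.0/24 ! -o podman0 -j MASQUERADE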
But then you also have port forwarding rules. I can do, say, podman run -p 8080:80, and that means we need what we call a one-to-one rule, which allows that one host port to be translated to one port on one specific IP on the bridge. And then we call Aardvark, or specifically, we add Aardvark configs. Aardvark is a container-specific DNS server, but it's still a DNS server, which means it needs to know what IP addresses it's serving and what names are associated with them. So we take the container name and the container's IP addresses, shove them into the Aardvark config, and reload it. Once that happens, any other container on the network is able to ping it by name.

Okay, so are you going to get Netavark and Aardvark is the next question. The answer is: if you are upgrading from Fedora 35, probably not, unless you explicitly want it. This is a new stack, and we recognize there may be potential bugs; we don't want to break anyone's installation, so we are not migrating anyone who just upgrades and was previously using Podman. If we detect that you have any containers, any images, any pods, anything at all that changed on your Podman from a straight default, we are going to keep you on CNI. If you want to move over to Netavark, we recommend that you do a complete reset of the system with podman system reset, which removes everything: containers, pods, images, et cetera. Or you can opt to manually edit the containers.conf config file to change yourself over. But if you are on a fresh installation of Fedora 36, you will get Netavark by default; no migration concerns there.

Brent, you're muted. Okay, one of the other big-ticket items for Podman 4 is the Podman machine work we've been doing. Just to give an overview here: Podman machine is very similar in mission to Docker Desktop. Right now we're CLI only, but it essentially allows you to use Podman to create a virtual machine that runs a specialized version of Fedora CoreOS, and then the Podman command, or the Podman executable, on the host interacts with the system service running inside the VM. This allows operating systems like macOS and Windows, which can't run Linux-native containers, to take advantage of running Podman. It uses Fedora CoreOS underneath, in a sort of appliance-like approach, and, probably most important to people right now: no cost, no sign-up, no registration. Just use Podman.

We support macOS via Homebrew right now, and it's a simple brew install podman. In the future, we intend to have a self-contained Podman package that will do the installation and help you configure your first VM, or machine, as we call them. It's supported on Windows, where we now have a guided install that uses WSL for its virtualization, with the same sort of user experience: you can run Podman in WSL and everything is easy and taken care of. And you can also run it on Linux; any distribution like Fedora 36 should be able to run it, as long as it has Podman 4, its dependencies, and QEMU.

Quickly looking at some of the features: it's a very easy-to-use approach. Once you have it installed, for example on macOS, you simply do a podman machine init, and Podman will pull the virtual machine image down, uncompress it, and boot it with a specialized Ignition file. When it returns and says you're ready to go, you can begin using Podman as you would on a regular Linux machine.
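On macOS, the flow just described boils down to roughly these commands (the hello image is just one convenient smoke test):

    brew install podman
    podman machine init    # downloads and configures the Fedora CoreOS VM
    podman machine start
    podman run quay.io/podman/hello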
We also now, by default, map the sockets from the virtual machine back into the host OS. So, for example, the socket for Podman's system service, what a lot of people would think of as the Docker socket, is mapped back into macOS so that you can use things like docker-compose or docker-py to interact with it. Then we have default volume mounting, volume mounting being the ability to take a directory from the host and mount it into the virtual machine so that the containers can take advantage of it. And the other thing is port mapping. When you run Podman normally on a Linux installation, for example, it would be commonplace to map port 8080, or port 8000, or anything like that for some sort of web application. We do that as well; there's a little bit of extra trickery going on, in the sense that the port mapping occurs on both the virtual machine and the host machine, so that you could, for example, curl localhost at a port and be interacting with the virtual machine's containers.

Okay, now we transition to the Podman 4 highlights, and I think this is kind of what we're here for. Matt, quick time check. We have about 12 minutes. All right, so Podman 4 is our largest release to date: over 78 new features and a large number of bug fixes; you can see the general statistics there. Part of why this has been such a large release is that it took longer than we usually do. Normally we try to get a release out every three months; this one took about double that, to get the network stack right. One thing I will note: we only have a relatively small team working on Podman full-time, so of the many contributors, the vast majority are from the community.

All right, we mentioned the new network stack and took a deep dive into it, but just looking back at it: Matt mentioned the IPv6 support, the DNS support, the improved startup time, and the focus on single-node networking. One thing we didn't hit is that Netavark and Aardvark also work for rootless users, without any significant effort to make that go.

We put quite a few new things in for kube; some of it came from the community, some of it came from us. This is relative to the podman play and podman generate commands. We now support Kubernetes-style init containers: a container that can be part of a pod and runs and executes first. An excellent use case is setup, for example doing some database setup before the actual database container runs. We've got new volume support for ConfigMaps with play kube. We can now also build images on the fly, similar to Docker Compose: you just need to set up the correct directory structure and have a Containerfile present, and Podman will build everything before it brings things up. And quite a few new command line options have been added to play kube. One that I like to let people know about is --replace. It used to be that if you ran a play kube instance, you'd have a pod or more running, and if you ran it again, it would error out because the name is basically taken already. So, Matt, can you hear me? Yes. Yeah, we had a little burp there. Okay. So the --replace option lets you override the name, and as such you don't get an error anymore; it just brings down what was there and puts up the new one. A couple more kube-related things: we now support injecting environment variables from fieldRef and resourceFieldRef sources, and we allow you to set default resource limits with play kube.
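A minimal sketch of the init container and --replace support described above (the YAML and names are illustrative, not from the talk):

    cat > demo.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      initContainers:
      - name: setup
        image: docker.io/library/alpine
        command: ["sh", "-c", "echo doing setup first"]
      containers:
      - name: web
        image: docker.io/library/nginx
    EOF
    podman play kube demo.yaml
    # running it again would normally fail because the pod name is taken;
    # --replace tears down the old pod and brings up the new one instead
    podman play kube --replace demo.yaml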
And just as a public service announcement, we'd like to see you stop using Docker Compose. Our justification for saying something like that is that Docker Compose, while wildly popular, is only useful for Docker. We do support Docker Compose, but if you begin using the play kube and generate kube functions, you'll be working straight with Kubernetes YAML. That lets you take a workload, a set of containers, from Podman, generate YAML representing those in a pod, and then push that YAML file to Kubernetes, and off it runs. You can also do the reverse, which is to take a YAML file out of Kubernetes and run it on your single node by playing that YAML file.

We've got some machine improvements. I mentioned some of them in the overview earlier, but: Windows support, with an installer now on Windows; the volume support and socket mapping in 4.0 and 4.1; and the ability to change hardware allocations once you've defined your VM, which we used not to allow.

Okay, we also have a number of enhancements to our pod functionality. If you're not familiar, Podman lets you group containers into pods, similar to Kubernetes. This is some cool functionality that we've had for a while, but it hasn't really gotten the amount of love it deserves, and we're starting to change that now. The first thing we're doing is adding support for adding volumes, devices, security settings, and sysctls to your pod that will be automatically set in all containers that join the pod. So I can add a volume mount, a shared volume, that mounts something into every container in the pod. And let's say that volume mount requires that I disable SELinux. Dan is gonna hate me for saying that, but I can automatically disable SELinux in every container in the pod so they can access the volume without issues. And this is really just the beginning: there are over 120 flags on podman run, and I wanna see almost all of those flags on podman pod create, so you can create a pod that has anything by default.

Okay, so Podman 4.0 was the big flagship release, and that was back in February, I believe, but just last week we came out with Podman 4.1, so I wanna hit a couple of highlights from it. With Podman machine, we've now got a default volume mount being added, mapping $HOME on the host to $HOME in the virtual machine. So for me on macOS, it would literally be /Users/baude on the host, and /Users/baude within the virtual machine. You can take advantage of that by knowing it's going to be statically there and use it with your containers. We put Podman on a diet and were able to reduce Podman's binary size by 15% relative to Podman 4.0. We were able to get Docker Compose version 2 working: if you haven't followed along, the initial Docker Compose was based on the docker-py Python library, and the new work Docker's been doing is all in Go, so we now support that as well. And we've done a number of enhancements to build, including better support for BuildKit. So podman build now has improved support for BuildKit, and I should clarify here that we are not directly implementing BuildKit, which is a new way of building containers. More accurately, BuildKit has a bunch of additional options for building containers that the original docker build and podman build do not have, so we're just adding those options to podman build; we're going to make ourselves compatible in that way. The biggest things here are a bunch of new mount types. Most of these were already available in podman run, but now you can do cache mounts, bind mounts, and tmpfs mounts in your builds.
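As a quick sketch of the cache mount support just mentioned (the Containerfile contents are illustrative; the cache directory persists across builds, so repeated builds skip the downloads):

    cat > Containerfile <<'EOF'
    FROM docker.io/library/python:3.10
    COPY requirements.txt .
    # cache pip downloads across builds instead of re-fetching every time
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip install -r requirements.txt
    EOF
    podman build -t myapp .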
You have better control over outputs, including the ability to output directly to a tarball or directly to a directory, instead of creating an image and then exporting that image. There's improved multi-architecture support: you can now explicitly specify what architecture you're getting your base image from in the FROM instruction. And there are some minor enhancements to manifest lists: previously you had to use podman manifest tag, I believe, to tag them; now podman tag works as well.

Okay. We are nearly complete here, but I just wanna talk about our community. Podman is a happy, healthy project right now. We wanna make you all aware that we have a monthly meeting, the first Tuesday of every even month, where we focus on project news, do a lot of demos, and actually ask users of Podman to come present ways in which they use it, or side projects they're working on that are related to Podman. And then on the odd-numbered months, I believe, we have what are known as community cabals. These are basically problem-solving discussions that center around features, or large bugs, that warrant a larger discussion than just one person going in and fixing them or implementing a feature. We also have an email list for people to ask questions, get support, things along those lines. All of that can be found on podman.io.

Speaking of which, we'll end here with some of our social media and communication avenues. Again, most of the action is on GitHub. I mentioned podman.io; we also have a Twitter handle and our newly renamed YouTube channel. And that is it.

We have a couple of questions in the Q&A; we'll try to get through one or two of them real quick. First one: what about Podman containers without Kubernetes, any Podman Compose equivalent? That's what podman play kube is for. Basically, we believe Kubernetes YAML is the replacement for Docker Compose, so we want to work the same way Kubernetes does, same format.

Okay, somebody mentioned the specialized Fedora CoreOS: does that mean we're building or generating a different Fedora CoreOS image, or just adding an extra layer on top of the existing one for the Podman machine? Right now we've kind of painted ourselves into a corner, because we're releasing Podman 4.0 and 4.1, yet Fedora CoreOS is anywhere from two to six weeks behind; it takes a little bit of time for the Podman packages to make it into one of the FCOS streams. So we have been making an image that includes the latest stuff. However, the plan is that as soon as 4.1 filters into some of the FCOS streams, we will turn off our builds and it'll switch over to FCOS proper.