So, hello everyone, I hope you're hearing me well. Welcome to DevCon, my name is Lenka, I'm the moderator for this session. Our next speaker, Valentin, will be talking about auto-updating containers with Podman on the edge. There will be some time for your questions after the session; you can write them into the Q&A section. So without any further delay, Valentin, you can start. Thank you, Lenka. Thanks for having me, always a pleasure here at DevCon, really great organization, always cool. Even virtual, which is always a little bit odd, I find it's still a great pleasure. Preethi sends her regards, she's out of office today, so you have to deal with me. I'm Valentin, I work in the Container Runtimes team at Red Hat. Before clicking on the next slide, I want to have a common definition of what the edge is. There are various definitions out there, at least that I've heard so far, and my favorite one is really computing outside the data center. This could be, you know, a light bulb, an oil rig offshore, or a car. Managing such workloads is of increasing importance at the moment. This presentation covers how Podman can be used on the edge today, and the things we've built together with the community over the past two years. So, to the agenda: first we start with Podman and systemd. I guess a couple of you have already heard me talk and write about that, you know, how we can run systemd inside Podman, and how we can run Podman inside systemd services. This really is the base requirement for what we are about to do on the edge at the moment. Then I also want to elaborate a little bit on Podman's architecture. This is important to understand why Podman, in contrast to some other container engines, integrates pretty well with systemd on a modern Linux system; more about that later. And then finally, we can talk about auto-updates: what auto-updates look like from a bird's eye perspective, conceptually, and how we implemented them in Podman.
And then last but not least, since Podman 3.4, we also support simple rollbacks, which means if an update fails, we revert back to the known working state. So I'll skip the usual introduction slides to Podman; I think the DevCon community has heard plenty of talks by now. If you don't know Podman yet, it's a drop-in replacement for Docker, and we added quite a lot of new features to it. Later on, I'll explain a little bit of the difference with regards to the architecture. But when it comes to Podman and systemd, the team, and I think Red Hat in general, really tries to follow the "containers are Linux" philosophy. Containers, in the end, are just ordinary processes on our Linux system, with some attributes that are different than if I would run an ordinary binary on the host rootfs. We really want to focus on a seamless integration into modern Linux systems. And, well, arguably, systemd is an integral part, actually at the center, the heart, of a modern Linux system. So supporting systemd in conjunction with containers was really important for us. Historically, it has been difficult, because it was hard to integrate into client-server architectures; I'm going to talk a little bit about that later. And Docker upstream really didn't target supporting systemd, for instance. So we were facing a couple of challenges, because users were asking for it, customers were asking for it, and we wanted to run it. At some point, when Podman was born, we could just make it a first-class citizen in Podman and support it by default. When it comes to containers and systemd in general, there are two scenarios that we should discuss. The first one is running systemd inside a container. Why do we want to run systemd inside a container? Well, I think the most important argument for doing that is portability.
Again, containers are Linux, and there shouldn't really be a difference whether I install a package or run, let's say, httpd inside a container, in contrast to running it on the host. Historically, as I said, this has been difficult, because it hasn't been supported by Docker for a long while. So users had to come up with custom scripts; they were pretty much forced to write their own manual startup scripts, which was not the best user experience. It was also pretty hard for software vendors such as Red Hat to support it. How is a company supposed to support custom scripts? So running systemd inside a container really gives a huge portability advantage. The second scenario, which we're going to elaborate on later as well, is running a container, or Podman, inside a systemd service. So, containerizing systemd services. Again, containers are Linux, and there shouldn't really be a difference. And I always find it pretty cool, because it's like a marriage of rather traditional Linux sysadmin work, where we're using systemd, we're writing our service files, our dependencies are managed, everything is explorable, and we can just use the tools that we've been using for a long while already, in conjunction with the cloud-native world, benefiting from everything that has happened with containers in the past almost 10 years. To give a brief example of systemd inside a container, or inside Podman: it's really no rocket science. Systemd simply needs a specific environment to be set up, which mostly boils down to a couple of mounts, such as /var/log/journald, being mounted as a tmpfs. And then we can start systemd in the container. Podman does that automatically when the container's entry point is either init or directly systemd. You know, Podman looks at the entry point of the image, or at what the user has specified on the command line or via the REST API, and then it does all the mounting dance automatically for us.
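To make that concrete, here is a small terminal sketch of running a systemd-based image; the container name is a placeholder, and the exact process list may differ on your system:

```shell
# Run a UBI 8 image that ships with systemd pre-installed.
# Podman detects the systemd entry point and sets up the
# required tmpfs mounts automatically.
podman run -d --name sysd-demo registry.access.redhat.com/ubi8/ubi8-init

# List users, process IDs, and commands inside the container:
podman top sysd-demo user pid args
```

In the output, systemd should show up as PID 1, with journald and the D-Bus daemon running alongside it.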
But it can be further tweaked by using the --systemd command-line flag. What we can see in the example in the terminal is, you know, we run a ubi8-init container, which is a UBI 8 image with systemd pre-installed, which is pretty cool. And if we run podman top on this container and list the process IDs, the users, and the commands, we can see that, well, pretty cool: systemd is PID 1. It's the init process, which is exactly what systemd is supposed to be. And then there's also journald and the D-Bus daemon running. Now to the second use case. When it comes to running containerized systemd services, this is a bit more rocket science. There are many things to consider to properly integrate a container engine such as Podman into systemd. There are many moving targets: there's not only Podman running but, as I'm going to elaborate in a moment, a couple of other tools, and even more when we run Podman as an ordinary rootless user. But Podman simplifies this adventure with the podman generate systemd command. The input for podman generate systemd is either containers or pods, and the output is systemd units that we can then install on the host, either run as a system service, as root, or as a user service, which will then be a rootless container. We really improve it continuously with best practices from upstream. You know, we work a lot together with the community. It has been well received, which is super cool. And for sure also downstream at Red Hat: things that we notice on customer sites, feedback we get from customers, conversations we're having with other teams. And one example of the conversations we have with other teams, the future work, a really great addition taking it to the next step, is a more declarative approach via .container files and systemd generators. And there's a cool project, so shout-out to Alex Larsson for coming up with Quadlet. If you're interested, check it out.
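As a taste of that declarative approach, a Quadlet .container file might look roughly like this; the file path, image, and port are hypothetical, and the supported keys depend on the Quadlet version:

```ini
# web.container — hypothetical Quadlet unit (a sketch, not a tested config)
[Unit]
Description=Containerized web server

[Container]
Image=registry.access.redhat.com/ubi8/httpd-24
PublishPort=8080:8080

[Install]
WantedBy=default.target
```

Quadlet's systemd generator turns such a file into a full service unit, so the container is then managed like any other systemd service.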
It's similar to a Docker Compose file or Kubernetes YAML. You have an easy, declarative config file where you can say: OK, which image do I want to run, with which containers or which commands, and then Quadlet takes care of the systemd unit files and everything. It is a really, really cool thing. If you want to learn the details, or really dig deeper into how Podman and systemd work together in this scenario, feel free to refer to this blog. The team blogs a lot; we really write a lot about what we're doing. So if you're interested, just click on the link. I will upload the slides after this presentation. Now to Podman's architecture. It's really the enabler for running containerized systemd services. It hasn't really been possible with Docker before, because systemd really wants and needs to know which processes are running in a service, and with a client-server architecture, that's pretty hard. It really wants to pick and know a main process in order to manage the lifecycle: if the main process exits, the service isn't running anymore. So here is a really quick comparison of the architectures. When we look at Docker, we have the Docker client, which usually, traditionally, runs as a rootless user but is part of the docker group, which gives us access, you know, read and write rights, to the Docker socket, which in turn runs as root. So when we do a docker run, it really sends a remote procedure call to the Docker daemon, which in turn talks to another daemon, which then finally does a fork/exec to the container runtime, runc or crun, which then, last but not least, eventually runs the container. So when we have a systemd service, a systemd unit file, which has an ExecStart=docker run, yada yada, it's pretty hard for us and for systemd to know what the container is, because we really want the container, or a managing instance of the container, to be the main process.
So the team, a couple of years ago, tried to make this work for Docker, and it worked to a certain degree, but the pull requests were rejected. As I said, supporting this use case was not really a target of the Docker community. For Podman, the architecture is a little bit simpler, or quite a bit simpler. We have Podman, and then we have Conmon, which is short for container monitor. It's a very, very small tool written in C with an incredibly low memory footprint, which, in the end, keeps certain resources open, such as certain namespaces and file descriptors, and it execs the runtime. It also has a callback to Podman, since Podman is not running as a daemon, for things like cleaning up. Once the container exits, Conmon will figure that out, because it's running for the lifetime of the container. Then, depending on how the container has been started by the user, for instance with the --rm flag, Podman will have a cleanup callback initiated by Conmon. So Conmon, in this case, in a systemd unit, in a running service, is the main PID. As I said, it monitors and runs for the lifetime of the container. So systemd then really knows: OK, the service is up and running. Conmon also exits with the exit code of the container, so things like restart policies just work. I think most of you know this young gentleman, Dan Walsh. He has excellent talks out there in the wild about the security benefits of Podman's architecture. I don't want to leave it unmentioned: the architecture of Podman really has huge advantages and benefits when it comes to security, container security. But going into the details is really beyond the scope of this presentation, so I really recommend just watching Dan's talks. They're instructive, they're entertaining, and there's lots to learn. So slowly we're moving over to auto-updates.
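To illustrate how Conmon ends up as the tracked process, a unit produced by podman generate systemd --new looks roughly like this; it is abridged, the container and image names are placeholders, and the exact content varies between Podman versions:

```ini
# Sketch of a generated unit (abridged)
[Service]
Type=notify
NotifyAccess=all
Restart=on-failure
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon \
    --cgroups=no-conmon -d --rm --name mycontainer myimage:latest
ExecStop=/usr/bin/podman stop --cidfile=%t/%n.ctr-id
```

With the conmon sd_notify policy, Conmon delivers the ready notification and stays around for the container's lifetime, which is what lets systemd track the service correctly.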
But before going into the details, I want to take a bird's eye view of what auto-updates really mean in the context of a container. So let's take this example. We have a workload, could be anything, a light bulb, an oil rig, a ship on the ocean, a car, a train, whatever, could be a fleet. We have a container registry and a sysadmin. On this workload, we have containers running, using images from the container registry. So once the sysadmin pushes a new image to the registry, we really would like the workload, which is running somewhere on the edge, could really be offshore, to pull down the image automatically. Once there's a new image, it should pull it down and then restart the services with the new image. It's as simple as that. In practice, well, we have had a podman auto-update command since version 3.1 or 3.2, and it does exactly that. It checks the containers and the used images, checks whether the registry has a new image, pulls it down, and restarts the containers, or the services, that are using this image. However, the containers, so Podman, must run in a systemd service in order for this to work. A blog post I reference later in the presentation goes into the details, so let's just stick with that: these containers, or these services, must run in a systemd service. It can be triggered manually, by running the podman auto-update command, or remotely, via a REST API call to the Podman service. But there's also, pre-installed on RHEL, Fedora, CentOS, Ubuntu, openSUSE, Arch, all Linux distributions that ship with Podman, a podman-auto-update.service, so a systemd service. So it can be integrated into systemd workflows: you can do a systemctl start, it can be integrated into the dependencies of whatever we're running locally. But there are also time-based triggers, which can be fired with the podman-auto-update.timer. So it really integrates well into edge and IoT, because the sysadmin really doesn't have to manage the fleet anymore.
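Putting the pieces together, a typical auto-update setup for a rootless user might look like this; the image name and paths are placeholders, and details may differ across Podman versions:

```shell
# 1. Start the container with the auto-update label; the image
#    must be given as a fully qualified name so Podman knows
#    which registry to check for updates:
podman run -d --name web \
    --label "io.containers.autoupdate=registry" \
    registry.example.com/acme/web:latest

# 2. Generate a systemd unit and install it as a user service:
podman generate systemd --new --files --name web
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# 3. Trigger updates manually...
podman auto-update

# ...or enable the shipped timer for periodic checks:
systemctl --user enable --now podman-auto-update.timer
```

With the timer enabled, the node checks the registry on a schedule and restarts the service whenever a newer image appears.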
The fleet manages itself. The only thing that we need to do as sysadmins is push new images, update them, and then the fleet takes care of the rest. So why does it really make sense? Well, there are hard-to-reach devices on the edge. I mentioned it before: offshore, sometimes the connections aren't stable at all. If there is a boat somewhere on the ocean, there is probably no connection, or at least not a stable one. Updates can also be scheduled, and keeping edge services as secure and safe as possible from attackers is, well, increasingly important. And as I mentioned, since Podman 3.4, Podman ships with simple rollbacks. So how do rollbacks work? We already discussed this sequence here, how an auto-update process would look: we first push an image, the fleet, or the workload, pulls it down, and it restarts the service with the new image. And, well, it could happen that the sysadmin accidentally pushes the wrong image. I am a terrible sysadmin; I'm able to break everything I touch. So I would really love to have some safety net below me, which would be able to roll back to the previous known-to-work image if something bad happens. This is exactly the fourth step here: we revert to the previous image if the update fails. So once we came up with the idea of implementing that, we were facing the challenge of how we could actually detect that an update has failed. Because, as we already discussed and have seen in the architecture figures before, in the illustrations, the main PID of the service is Conmon. And, well, systemd will mark the service as started successfully as soon as Podman sends the ready message via sd_notify. So once the container has started, by default, Podman sends the ready message via sd_notify. In this case, systemd assumes, pardon me, that everything's fine: we're cool, the service has been started successfully. But if we exit 1 immediately in the container, this would obviously be wrong.
If the container fails afterwards, you know, in a failed initialization of the database, we would not really catch these scenarios. Just by starting the container, systemd would assume that it has been started successfully. So in order to really detect whether an update has failed, the container workload, so what's running inside the container, must send the ready message. And this was a quite interesting journey; we've been working quite closely with the community to get that working. So in order to get that to work, there is a --sdnotify flag for podman create and podman run, which controls the sd_notify policy. By default, it's, I think, conmon. In this case, we've got to set it to container. When we set it to container, Podman will mount the notify socket into the container, and Conmon will serve as a proxy: it will mount the socket as well and forward all the messages to the host's systemd. So what would a successful update look like when we're running with rollbacks? In this case, systemd receives the ready message. An easy way would be to just install systemd-notify in the container image. Let's assume we wait for a database to be initialized; this can take a couple of minutes, depending on the workload. Once it has been initialized, you know, we can script it and send systemd-notify --ready, and, well, we get a thumbs up. A failed update: either the start timeout kicks in, by default it's 90 seconds, and it can be customized, either manually in the generated unit files or, now with Podman 4.0, directly on the command line. Another scenario where the update would fail is when the main PID dies without sending the ready message. You know, there are really a couple of things that can go wrong, but using the container sd_notify policy, this is a way rollbacks can be implemented. And this really always depends on the workload, on what we're running inside.
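A minimal sketch of the container policy; the image name and the readiness check are hypothetical, and the image is assumed to have systemd-notify installed:

```shell
# Podman mounts the notify socket into the container, and Conmon
# proxies all sd_notify messages to the host's systemd:
podman create --name db --sdnotify=container \
    registry.example.com/acme/db:latest

# Inside the container, the workload itself signals readiness once
# its initialization is really done, e.g. in a start script:
#
#   until db-is-initialized; do sleep 1; done   # hypothetical check
#   systemd-notify --ready
```

Until that ready message arrives, or the start timeout expires, systemd considers the service to still be starting, and that is what makes failure detection, and therefore rollback, possible.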
This is something that the user, or the vendor of the workload, has to specify. But again, Podman makes it really, really easy, with podman generate systemd, to make use of that. If you're interested and want to know more details, Preethi, Dan, and I wrote a rather detailed article at the end of last year about how to use auto-updates and rollbacks. We really give an example, step-by-step instructions on how to do that, also with a couple of anecdotes and background information on all the details that I could only scratch the surface of in the past 22 minutes. So, as I mentioned before, the team blogs a lot, a lot, a lot. If you're interested in Podman, or in the containers ecosystem in general, or want to look up the blog posts that I mentioned here, either go to the DevCon site, click on the presentation here, on the talk, I will upload the slides, or go to podman.io. There you'll find all the information. There's a blog; we also, of course, reference blogs on other websites. You have information on the mailing list, how to reach out to the team. We have, you know, a Matrix channel, IRC channels, and all the cool stuff that the cool kids use today. We blog a lot on redhat.com/sysadmin, on opensource.com, on developers.redhat.com. And if you have any questions, if you encounter an issue, if you want to reach out to us or, ideally, contribute, well, reach out to us on github.com/containers/podman. And that is the end of my presentation; I'm looking forward to answering any questions. Thank you, Valentin. Thank you for your presentation. I can see a huge interaction in the chat, and we have two questions. So quickly on them, because we are running out of time. The first question is: I had the impression that Red Hat wouldn't recommend Podman for production use cases, but rather for development. Is that changing these days, with Podman being used on the edge and on IoT? If I read it correctly: IoT, Internet of Things.
Thank you, Lenka, and thank you also for the question. Well, I would definitely recommend Podman for production use cases. I mean, Podman has shipped since RHEL 8.0 as the only container engine; it has also shipped before, in RHEL 7. Yes, definitely use it, and if it doesn't work, please reach out. But, you know, Podman's use case is single node. We are not targeting it to be used in Kubernetes or in OpenShift. This is really single node, for developers and also sysadmins, or, as I mentioned here in the example, when you want to run workloads on single nodes. So yes, definitely. Thank you, great. And another question: are there ways to avoid downloading full container layers on updates, to save bandwidth? Yes, there are. We are lucky to have an extremely brilliant and talented engineer in our ranks, Giuseppe Scrivano, who is working on that at the moment. He uses new zstd features, so it's a rather advanced feature at the moment: the layers must not be in the traditional gzip compression but in the zstd compression, and then it's possible with Podman to avoid downloading the entire layer and pull only what the container needs. So there's plenty of stuff happening at the moment. Giuseppe, I'm not sure if you've blogged on that already; if so, please, would you share a link in the chat, or interact in the chat? Thank you. Thank you, Valentin. So that's it. We don't have any other questions, but you are all welcome to join Valentin at the work adventure. So that's it for this session. Thank you for joining. Thanks, Valentin. And now you are welcome to join the stage room for the break activity. This time it will be a virtual activity with the Marine Ordin.