Well, thank you for joining me today for this session on rootless containers with Podman. I hope that over the course of this session you learn something new, that you look forward to trying out Podman and playing with some rootless containers, and, I'm afraid, you'll probably realize why I have some trust issues.

Here's a high-level view of today's agenda. We're going to cover some of the underlying technology around containers and Podman, but there's an assumption here that this isn't a newbie session; we're not going to do a Containers 101. I'm assuming you know some basics about containers, Docker, and the fundamentals of containerization technology. Then we'll dig into why you would use rootless containers and why you really should care about this; it's quite an important thing. Then we're going to implement it using some simple examples, look at one that's gone well and one I'm still having some issues with, and at what the positives and negatives are, and perhaps a few constraints. Hopefully, along the way, I'll give you a few pointers on how to get yourself started with running rootless containers.

So first we're going to dig into some stuff around container standards. Now, Docker really kick-started this container revolution, but we owe a lot to groups like the Open Container Initiative.
We now have standardized runtime interfaces like CRI-O. This means we have a standardized way to initialize and run a container image, and CRI-O is really important here because it was ultimately designed as a container runtime suitable for Kubernetes. It will run any OCI-compliant, Docker-compatible container image, so you don't have to build your images with Docker; your images simply have to meet the standard specification, which was donated by Docker. What OCI means, and what we get out of CRI-O, is a much more reduced attack surface when it comes to how we start, stop, and initiate containers. It's a key part of the Kubernetes project, and it has a close relationship with Podman.

With Podman, we're taking an alternative approach to the tooling normally associated with Docker. Docker actually presents quite a large attack surface when it comes to running, building, and maintaining containers, because it's a bit of a Swiss army knife: it does lots of different things. It's not just a container standard; it's a way to run a container, a way to build a container, a way to manage your container images, and so forth. We took the decision within Red Hat and our open source communities to split these capabilities into different tools. Podman is very much focused on how to run a container; Skopeo is all about image management; and Buildah provides tooling so we can build a container without some of the overheads associated with Docker.
So let's dig into each of these. Podman is all about giving you a very familiar experience if you're an existing user of the Docker CLI. It's great for running, building, and sharing containers outside of platforms like OpenShift, just using it on the command line. Today you can use it on platforms other than Linux in a client-server mode, but for today's demo I'm using it natively on a Linux environment. It can easily be wired into a lot of the ways you would normally use the Docker CLI today, and thanks to some recent extensions we now have an API server compatible with the Docker API. It's great at maintaining and running non-root containers, it integrates very cleanly with things like systemd, and it's pretty much the standard container runtime tooling that we ship as part of RHEL, but also in other distributions, particularly Fedora. We're starting to see adoption quite broadly outside the Fedora and Red Hat ecosystem today because of some of those benefits, particularly the much smaller attack surface.

Buildah, on the other hand, is all about building OCI-compatible container images that match the format donated by Docker. What's great here is that you can build multiple ways: you can build in the old, traditional Dockerfile way, or do multi-stage builds and layer things up, but you do this outside of the container image. Historically, you needed to initiate a container and then build its layers up within it. That could lead to leaving development tools behind, maybe some security keys behind, in an image where you didn't want them. It also tended to lead to quite large containers. Now, whilst there's tooling out there to let you strip container images down: well, why put things in in the first place?
Let's build the container with just the bits we need, and let's set it up so you can build it as a non-root user. So if you've got multi-user development environments, perhaps a Linux system where many users have their own shell access, each user can build and manage their own container images without needing to do that as root, and without the special privileges needed to access the Docker API over a socket and have Docker, running as root, build those container images. Again: much smaller attack surface, better security.

So let's talk about why we don't want to get rooted, because this is fairly important. Being rooted, being attacked: security is a major issue. So why would we talk about this in the context of rootless containers? I've worked in Linux operations for a long time; a large part of my professional career has been helping people run systems at scale. In traditional Linux environments, this is mostly a solved problem. Ten, fifteen, twenty years ago, it wasn't uncommon to get an installation guide that said: install this as the root user. Be it from a third-party vendor, be it part of an ISV application, or be it from an internal development team. We need to install this as root because it needs special privileges. We need to install it as root because we don't understand the permission model.
We need you to turn SELinux off, because... and there's always a reason. In fact, one of the worst things I ever saw, many years ago, was a major proprietary application stack whose installation guide required us to leave an X11 console open, with screen lock off, with the management tools running as root. That was one of their core requirements: a live, root-access X11 session running on the console. Thankfully, we've moved on a lot since then.

But we've become quite lazy in the container ecosystem. Initially, all container images had to be run as root: you had to be logged in as root to do the run, and internally most of the systems assumed the services were effectively running as root. Rootless containers are all about containers that can be created, run, and managed by users without any admin access. This adds a number of benefits aside from the security profile. It means multiple users can run the same containers at the same time without interfering with each other, because they're living within their own user space.

So why Podman? Well, we mentioned this earlier: it's been fundamentally designed with security in mind, and in particular it heavily leverages SELinux. It's a smaller attack surface because it's just a runtime engine, and all those rootless capabilities are built in. I like the fact that it integrates nicely with systemd, which means I can set up system services that now actually run as containers, and overall it helps me reduce my virtual machine footprint, as many things I might have run as a virtual machine in the past I can now run as a container.

Why should I care? I've had people come up to me at events I've talked at in the past, and even when I'm talking about why rootless matters with customers and partners, and they've said: well, I'm good.
You know, I have complete control over all my containers; I build them all from scratch. Really? Honestly, everything from scratch, including the base operating system? You've got that level of complete control? Also, you're going to be patching them every time there's a security issue out in the wild? You've got all that control built in, all that CI/CD release management built in? Awesome. But you don't run any community containers? You haven't got a Helm chart or a deployment script running somewhere that goes and sucks some random third-party container into your ecosystem? And you're not using any commercial containers? We're starting to see a large number of commercial organizations now ship their software in the form of containers.

And they go: well, that's okay, because my container platform is so secure. Oh really? That's terrific, and maybe it is. But do you trust the other platforms you may be running on? Are you using community builds of your operating system, of your container runtime? Are you using a container runtime in the public cloud? Do you know what else is running alongside you in those environments? Are you sure it's as secure as you think it is?

Ultimately, almost all of us consume some kind of base OS that we build our containers up from, unless you're a Golang user and you're just building a pure binary. Common choices are things like Alpine and Ubuntu; in the Linux and Red Hat ecosystem we have our RHEL UBI images, and we now have three of those (four, actually, depending on the type of image you wish to deploy), and we'll show some of those off as we go through today's demo. Microsoft SQL Server now even ships based off our UBI image.
It means they get the security and stability of running on top of a RHEL baseline image. And the UBI image is great because it's freely redistributable, so you can build your own redistributable containers on top of a known, enterprise-ready container baseline.

But despite all this, I'm paranoid. I want to know that my environment is as secure as possible, so I want to be able to run things rootless. This becomes even more important when you start looking at the number of vulnerabilities that have been out there over the last few years. Just a basic vulnerability scan of all the containers inside things like Docker Hub shows a huge number of critical vulnerabilities. Prior to us acquiring StackRox recently, their previous security report found 90 percent of respondents had had a security incident; the latest report from this year put that up to 94 percent. There are always new security risks and issues out there today, no matter where you're consuming Docker, Kubernetes, or your container runtime from. So let's go rootless; let's avoid that pain and that headache.

When I'm trying to build a demo or a scenario, or do something even for a community talk, I try to put myself, to a degree, in the mind of a customer or an end user. I want to validate the technology, but to follow through on this I also want to do it in a way that excites me. So I'll pick something up, I'll try to make sure I do things in a sensible way, but I want to do something cool and fun. I try to avoid cutting corners, mostly; it's my lab, it's my environment, I know which corners I can cut. But I also have to think: if I was doing this for real, for a customer, what would they be doing? How would they be deploying this? Because often, some of the blueprints I've used for myself I've ended up using with a customer in the field. And I also thought: what do I need that could or should be in a container?
And ideally something using a third-party container, so I genuinely have that potential attack vector in my environment.

Now, this is where it comes down to the issues we all face, whether we're running our own lab or home environment, or running things inside a business or organization at scale: there are always existing services that are very hard to replatform or refactor. So: I've got some home websites; I run a Trac system for my Subversion and Git; I still hack around on things like MythTV. There's a bunch of things I just have running every day around my home environment. But what's cool? What's shiny? What do I want to play with today? Home automation. Like many people, I want to play with home automation, and I thought: great, I'll go and run that in some containers. It's the right way to try it out; no new virtual machines needed, low overhead. Awesome, let's give this a go.

So, Podman: if we're going to run rootless, what do we need? Really, today, Podman 3, or 1.6.4 or newer, depending on what's shipped as part of your favorite distro; 2.x onwards sorted out a lot of rootless issues, and each release deals with more edge cases and pain points. We need an additional package around the network layer, to make sure we've got the right network components for running these rootless services and all the networking bits under the hood. We also need to make sure we've got the right number of user namespaces available; effectively, each user ends up consuming additional namespaces when they run these rootless containers. And finally: are you running these containers as a traditional user, or as a system user?
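A quick way to sanity-check those prerequisites is a little script like this. It's a sketch: the package name slirp4netns is my assumption (it's the usual rootless networking backend, but your distro may package the network layer differently), and the script only reports, it changes nothing.

```shell
# Rootless prerequisite check -- a sketch; slirp4netns is an assumption
# (the common rootless network backend), adjust for your distro.
{
# How many user namespaces may be created? Rootless containers consume
# extra namespaces per user, so this should be comfortably non-zero.
if [ -r /proc/sys/user/max_user_namespaces ]; then
    echo "max_user_namespaces=$(cat /proc/sys/user/max_user_namespaces)"
else
    echo "max_user_namespaces=unknown"
fi

# Are the tools installed, and at what version?
for tool in podman slirp4netns; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
    else
        echo "$tool: not installed"
    fi
done
echo "prereq check complete"
} | tee prereq-check.log
```

If either tool reports "not installed", your distro's podman package (and its rootless networking dependency) is the place to start.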
If I'm running these things as a system user, then I may need to go and add some additional sub-UID and sub-GID entries for that system user, because they won't be automatically created as part of user creation.

So, confirm the version of Podman. In this particular case, 3.2.3 is the latest build, fairly recent, and for the purposes of the demo I'm running it on RHEL 8.4. Just to prove I've got a real environment here: there's the version of Red Hat I'm running, and if I do podman version, there are the details. This is a simple RHEL VM image that I can kickstart and recreate really quickly on my home KVM server for the purposes of quick labs and demos.

So, rootless options. I want to do some testing first, so I'll just create a dummy user called fred. Hey Fred, how are you doing today? I'm going to make sure Fred isn't running things as root. I want to pull an image, I want to run an image, and I want to make sure that the services inside it are, or are not, running as root.
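Before moving on: those sub-UID and sub-GID entries have a simple format. This is a sketch; the user name has and the 200000 range are assumptions (pick a range that doesn't overlap any other user's), and the demo writes local files so it is safe to run anywhere. The real files are /etc/subuid and /etc/subgid.

```shell
# Subordinate ID entries for a system user -- a sketch. The "has" user
# and the 200000 range are assumptions. On newer shadow-utils you can
# run, as root:
#
#   usermod --add-subuids 200000-265535 --add-subgids 200000-265535 has
#
# Otherwise, edit /etc/subuid and /etc/subgid directly. System users
# (useradd -r) do NOT get these entries automatically, which is why
# rootless containers fail for them until the entries exist.
# Each line means: <user>:<first subordinate id>:<count>.
printf 'has:200000:65536\n' > demo-subuid
cp demo-subuid demo-subgid

grep -q '^has:' demo-subuid && echo "entries in place"
```

Without an entry, any container that needs more than the user's own UID will fail to start for that user.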
First of all, the left-hand example here is running a service where, inside the container, you're running as root, but outside the container you're a non-root user. The second one is where the process inside the container also runs as a non-root user. This is something we'll come back to later in the talk, but is it an issue? Many containers you get off the shelf from a container registry today are actually built so that the services internally still run as root. That means that if you run as root on the outside and you're running things as root on the inside, you're creating a higher attack surface and potential risk. Ideally, modern containers should be defined so that the services within them run as non-root users, much as you would do on a traditional Linux platform.

So let's take a quick look at this inside our test environment. If I do su - fred and then podman version: there we go, we're good. Now if I do podman images, you can see I've already pulled down our Red Hat UBI base image, and it's a nice small image, just 38 MB. If I run id, there's our user fred, with his context and his details. If I now run my image and do id, I'm running internally as root.

Okay, so let's rerun our command, this time as the user nobody. Now if I do id, I'm running as the user nobody within our container image; if I do whoami, I am nobody. Just to prove the big difference between being inside this container and outside in the base OS: if I do ls /usr/bin, you can see that's the number of commands I've got available; that's the extent of the software installed. If I jump out and run the same ls on the host, a conventional, relatively small image, I've got a heck of a lot more things installed. So: a nice small image, up and running, and I'm able to run it as a non-root user.
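The whole baseline test above boils down to one flag. Here it is as a dry-run sketch: the commands are printed rather than executed, so it is safe to run anywhere; drop the echo prefix on a host with Podman installed. The image path is the standard UBI registry location.

```shell
# Rootless baseline test, printed as a dry run -- drop the "echo"
# prefixes on a host with podman installed to run it for real.
IMAGE=registry.access.redhat.com/ubi8/ubi
{
# Without --user, id inside the container reports uid=0(root), even
# though the podman process itself is an unprivileged user on the host:
echo podman run --rm "$IMAGE" id

# With --user, nothing runs as root on either side of the boundary:
echo podman run --rm --user nobody "$IMAGE" id
} | tee rootless-baseline.log
```

That one flag is the difference between "rootless on the outside" and "rootless on both sides".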
So that's a really nice, simple test; a baseline to make sure you can actually do things rootless. If I do podman ps, much as you would run docker ps, you can see I've got no running images, and if I do podman ps -a, you can see the images I was running earlier have now exited. As we said, Podman is interchangeable with Docker as a container engine; runc is actually the OCI-compatible container runtime that it initiates, so Podman just acts as the layer over that OCI-compatible runtime. We've just gone through a few of those commands, looking at what images we've got and making sure the containers we ran have now stopped. A few standard, basic commands, much as you'd expect coming from Docker.

So now we're going to move on to the core of the talk, where we're going to play with Home Assistant. I just need to give a really quick shout-out to an old friend of mine, Chris Smart. I find that every time I want to go and play with some new technology, Chris is probably playing with it already, kicking the tires, and has worked out a few edge cases. So when I first looked at this, close to two years ago, Chris had already found a few issues, particularly with rootless containers. He was playing on Fedora; I'm doing it on top of RHEL, and that's where I started to hit a few things he hadn't seen, which was useful, because I ended up being able to help our engineering team resolve a few bugs.

So let's create the environment. We'll set up an initial user for running Home Assistant, called has. We're going to make this a system user, living in our lab area rather than /home, and because we're running it as a system-type user, I need to go and create those additional sub-UIDs. Once they're in place, we need to create the config directories with the right SELinux permissions; they need a few extra SELinux contexts applied. And then we need to expose the service.
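The context and exposure steps described above might look roughly like this. It's a dry-run sketch: the directory path, the has user, and Home Assistant's default port 8123 are assumptions from my setup; drop the echo prefixes and run as root to apply for real.

```shell
# SELinux labelling and service exposure -- dry run; drop the "echo"
# prefixes and run as root to apply. Paths and port are assumptions.
{
# Label the persistent config area so SELinux-confined container
# processes are allowed to read and write it:
echo semanage fcontext -a -t container_file_t "'/var/local/has/config(/.*)?'"
echo restorecon -Rv /var/local/has/config

# (A :Z suffix on a -v volume mount achieves the same relabelling
# per run, if you prefer not to manage contexts by hand.)

# Open the web UI port through firewalld:
echo firewall-cmd --permanent --add-port=8123/tcp
echo firewall-cmd --reload
} | tee expose-service.log
```

The semanage route makes the label persistent across relabels; the :Z route keeps everything in the run command.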
I want to be able to connect to this service, and just to prove it, right now I can't: there's no service running on that port on my virtual machine. So let's jump into the environment and become my user, has. Here I've already created those directories, config and ssl, and I'm actually getting ahead of myself, because I've already got my mosquitto directory too, which I'll come back to a little later in the talk. Those environment pieces have already been set up. In fact, to help speed things up, I previously pulled down the Home Assistant image as well as my mosquitto image, so they're already pre-cached; as you can see, I updated those caches shortly before recording this talk.

So let's do some initial testing. I'm now the has user; let's actually run our Home Assistant image. Now, this is something I highly recommend: before you get too carried away, do a bit of basic testing. I run the image, and there's an ID. If I do podman ps, I can see the image is running; if I do podman logs -f has, I can see the logs of the container image coming up. Awesome. Let's go back to our web browser, and I should be able to hit refresh; now we've got a brand new environment that I can go in and configure from scratch. That's Steve, with Steve's really hard-to-remember password, and no, I don't live in Amsterdam; I'm going to say I'm in the New Zealand time zone, as I'm based in Auckland, New Zealand. So I can now go through and finish my configuration of Home Assistant, but all that data is actually being persisted in those directories outside of the container.

So, with the container running, I can now stop and remove the container, removing the runtime image, and that's now gone. If I jump back here, the browser gives me a warning because the connection has been lost. It went away that easily, but all that data lives under here; as I configure it, the data is separated from the runtime.
This makes things like backup so much simpler. I can easily shut down the container and back up a very, very small set of directories, and recreating it is just as simple as starting the container back up again.

But what I want to do now is enable it as a service. I want to have it always available, so that when my virtual machine restarts, it goes and brings the service back online. So as the root user (oops, getting ahead of myself) I can create a systemd service and make sure it runs exactly the same start command, but as the user and group has. Now, one thing about systemd: it's also a bit of a Swiss army knife, so there are several ways of doing this; this is just one example. I'm telling it that I want this to run under the multi-user target, and on a restart it does a stop and an rm, so it removes the named container before bringing it back up again. So there's my systemd service definition for has.

Now, the next part of this: it turns out that for some of my devices I need an MQTT broker. One reason is that I'm playing around with a number of smart-plug-type devices, simple smart plugs that can be easily reflashed with Tasmota, an open source firmware replacement for a bunch of commonly available plugs. A word of warning: be careful, as a number of the Tuya-based devices now have much newer firmware that's far harder to flash. I was lucky; I've got a few of the older devices, so I was able to very easily do a Wi-Fi-based over-the-air flash onto the open source firmware. With newer versions of Home Assistant there are ways of configuring them without MQTT, but I need an MQTT broker for some other things I'm running as well. Now, out there today there's a really nice off-the-shelf broker image, mosquitto, and I'm going to be fairly lazy here with the setup.
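Back to that systemd unit for a moment: a minimal hand-written version might look like this. It's a sketch; the has user and group, the paths, and the image name are assumptions from my setup, and newer Podman can also generate a unit for you with podman generate systemd. The sketch writes the unit to a local file; for real use, install it under /etc/systemd/system/ and run systemctl daemon-reload.

```shell
# Hand-written systemd unit for the rootless Home Assistant container.
# A sketch: user/group, paths and image are assumptions. For real use,
# install under /etc/systemd/system/ and then:
#   systemctl daemon-reload && systemctl enable --now house.service
cat > house.service <<'EOF'
[Unit]
Description=Home Assistant (rootless container)
After=network-online.target

[Service]
User=has
Group=has
# The "-" prefix means: ignore failure if no old container exists.
ExecStartPre=-/usr/bin/podman rm -f homeassistant
ExecStart=/usr/bin/podman run --name homeassistant \
  -v /var/local/has/config:/config:Z \
  ghcr.io/home-assistant/home-assistant:stable
ExecStop=/usr/bin/podman stop homeassistant
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
echo "unit written"
```

The stop-and-rm on the way in is what makes restarts idempotent: the named container is always recreated from the current image.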
We'll take some shortcuts. I should really run it under a different user, but it's so closely coupled with my Home Assistant environment that I'm going to run it as the same user. So let's jump in.

Now, there was one gotcha. When I first started using this, the default image off the shelf behaved the way I wanted. That's since been changed and revised, so with the latest image, in order to have it listen on the correct ports and behave the way I want, I now actually need to provide a config. So I now have this mosquitto directory, and in it a simple config file. It simply says: turn on the listener on the port I want, and allow anonymous access. Because at the moment, the way these plugs are set up, they live on their own separate VLAN, their own separate Wi-Fi network, and I'm not enforcing any extra security on them at present. Over time, if I want to do proper security handling for MQTT, I can update the mosquitto config, but for the moment it's pretty simple. Lazy, I know, but I have at least isolated the network.

So let's have a look back in my environment over here. There's my config file, and this is sufficient to do a test run of mosquitto. With it running, if I fire up one of my devices and turn the power on and off, we'll see the messages come through on the messaging bus here. And then if I do podman ps: not running; podman ps -a: you can see we're good, it's all gone away. Now, like Home Assistant, I want to run it as a systemd service and have it come back up when I restart the environment, so this is all I require. For the most part this is pretty good and behaves reasonably stably, but I'll come to a couple of troubleshooting tips
Well, I have a separate virtual machine that has my live environment and when it comes up Home assistant starts MQTT starts all my devices starts synchronizing and everything behaves nicely and It has greatly simplified things like backup and recovery But I want to run other things so one of the one I I used to run for a while And then I got a new TV and thought it'd be really nice to have his mini DLNA in a container But I found it has some issues trying to do Attach NFS base volumes when you're running as a non-root user So for the peps has shown this off. I'm going to drop back into my my one of my test users But here's some examples of how you set up volumes with podman so with podman you can create Volumes you want to attach into your containers at runtime. These can be remote NFS endpoints. Now if I run these commands as root everything behaves nicely Doesn't behave so well when I'm running as a non-root user Now I have flagged this to some of the podman development team and it is something that working on the background So hopefully we'll have a solution soon a simple way around this would be Use something like auto FS or have it in the FS tab on the container host That these volumes are being mounted somewhere that the container can access But I quite like the way we should be able to closely couple The the whole environment so the volumes are associated with the image So let's just jump back to our test user from earlier Mr. Fred if I do podman volume List you can see there's my volumes audio and video role now now if I do I'm not going to actually use the mini DLNA container I'm just going to use my UBI container again because it's really nice for simple debugging All I want to do is attach that video vol volume as month video vol inside my UBI container and Now I get an access issue So hopefully that'll be fixed soon. 
This thing doesn't happen when I run it as root So that I said there are other workarounds, but at the moment does show you there are some limitations and issues when you're trying to run services as An honorary user so your mileage may vary But try and change the way your developers work and the way your system teams work and have them make sure that they've got the Containers that they're developing and producing are rootless friendly So where are we today? Frustrating. Oh, I had a world of pain at the beginning It wasn't entirely fully functional for some of the workloads. I was looking at On rel 8-1 I had some real weird memory issues but I managed to get an early engineering build of Pub man ahead of the GF relate to that resolve the issues and it's been great ever since I wouldn't have had those issues on Fedora for when running upstream pub man It was just that I'm working with what my customers would work with the one that we ship has passed the distro and You know pub man's been through quite a lot of major revisions recently It's the point where we've now shipping pub man 3 is part of rel 8-4 And as I've said, I've got these issues with NFS volume management bad As I said, not all containers are ready to be rootless and it isn't easy to identify which ones will or won't work You kind of got to go and kick the tires yourself And often you're still running as root inside the containers and if you're not running as route outside So you do get a layer of security, but it's not where I end up ideally want to be crash consistency issues I had a power outage and the host that was running my Virtual machine that runs the pods went down and then didn't come back up again very cleanly The virtual machine did the pods didn't I had to do a bit of cleanup That's improved with each release of pod man If you do get things in a bad state, I've got a couple of troubleshooting tips later on The most common one is you just sometimes need particularly with system D services 
is do a system D stop Make sure that there's no lingering redundant parts left lying around then bring everything back up again Where I like though is Management day-to-day management is easy. It's very easy to update things very easy to back up because the configuration and data are very very Separated from the actual environment. I'm not backing up an entire operating system Every time I want to do a backup. I just have to back up data And because the way containers are tagged I can potentially roll back to a much older release of a service that I'm running. I Feel quite safe doing this with some community containers So let's what else have we got troubleshooting Overall most troubleshoots very similar to Docker. You can look for your old dead images You can look at the system logs. We showed earlier for each container as they're running Start stop remove image System prune is a really handy command. I'll show you that shortly If you are using system D avoid starting and stopping containers manually if you flick between system D and then pod man start pod man stop Things can get a little out of sync Sometimes you just need to go and do a tidy up most of the time just use the system CTL command start and stop the Services as and when you need them Upgrading workloads You can pre-cache the upgrade so I can go and pull the new image While the service is still running and then all I need to do is restart the service and it will now find the new image It's already been downloaded That's great If you don't want your service to be down for five or ten minutes while it's trying to find The latest container image and sync it particularly if you're dealing with really big container images Upgrading pod man itself This is where things can get interesting when I went to first do this updated version of the talk I was running a much older version of pod man and I thought oh just upgrade and everything's good and nothing quite worked Pod man system is a great little 
maintenance command. There's a few useful things here prune Reset or two in particular If you perform the major upgrade Pod man system migrate Will do a lot of the tidy up tasks you need to make sure the environments configure correctly remove redundant configs and An upgrade a few things so that it's aligned with the version of RunC pod man, etc. That's now on your host If you are still experiencing issues pod man system reset will go a little bit further and clean out a bunch of redundant data Now this is pod man data not your application data because that's living outside of your pods. It's still safe so I've yet to experience any major problems running these commands Pod man maintenance or pod man system prune is really handy Anyone who runs a lot of containers or is doing a lot of container image building you end up with a lot of redundant containers and redundant Environment information, so this is a great way of Tidying things up and dealing with damp dangling images and dangling builds things you no longer need on your system I'll just show this one off briefly if I become my As user again, I do pod man images. You'll see I've got some dangling images This is because I upgraded the version of home assistant I had earlier and the taggings change So the newer images are now tagged as latest Whereas if I now run pod man system prune Tut to go ahead It'll now go and do a cleanup of my users environment, and it's now reclaimed a chunk of space From my environment and I did a few other things up for me. 
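Putting the upgrade advice together, the low-downtime flow looks like this as a dry run. The image name, the service name, and the stop/rm-and-run unit behaviour follow my setup and are assumptions; drop the echo prefixes on a real host.

```shell
# Pre-cached upgrade of a containerised service -- dry run; drop the
# "echo" prefixes on a real host. Names are assumptions.
{
# 1. Pull the new image while the old container keeps serving; the
#    download happens in the background of normal operation:
echo podman pull ghcr.io/home-assistant/home-assistant:stable

# 2. Restart the service; its stop/rm-and-run cycle picks up the
#    freshly pulled image, so downtime is just the restart itself:
echo systemctl restart house.service

# 3. The superseded layers are now dangling and can be reclaimed:
echo podman system prune
} | tee upgrade-flow.log
```

Step 1 is what turns a five-or-ten-minute outage into a few seconds: the slow part finishes before anything stops.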
So again, Podman is great at just general maintenance; very, very useful.

A few references for you. There are some great links out there on how to get started with Podman: Podman basics, and what's actually happening behind the scenes when you're running rootless containers with Podman, as a video series we've been pushing out recently. One I want to raise in particular is our Podman Katacoda tutorial. This is an awesome way to get your hands dirty with Podman today without actually installing anything yourself. If I jump back to my web browser and over a tab: this environment allows you to deploy containers and manage them with Podman without ever having to install anything yourself. It's a lab environment hosted and operated by Red Hat; you don't need any kind of login to come and play with it, and you can get your hands dirty today and try out Podman. So take a look at this, and at the bunch of other container-centric labs we have available today, which you can use without any charge; they're available through the Red Hat labs environment. Please jump in, have a play, try it out.

So thank you all so much for your time and for joining us today. Hopefully I've got you interested in playing with Podman. Please reach out if you have any queries or questions; you can find me on Twitter, or on my people page at Red Hat, where a copy of this talk and all the slides will be made available shortly. And also let me know if you find a way to hack the more modern versions of these plugs; I do keep an eye on the various Tasmota wikis and the GitHub site to see if there have been any recent changes, and it would be great to find that we can continue to put open source firmware on proprietary hardware. So once again, thank you all for your time, and please let me know if you've got any questions.