Hi. Good morning, everyone. How are we doing this fine Friday morning? Bright and early. Great. Woo! OK. So I'm Urvashi Mohnani. I'm a software engineer at Red Hat on the OpenShift runtimes team, working mainly with CRI-O, Podman, Buildah, and Skopeo, so all the low-level container tools that we use nowadays. And this is? Yep, I'm Sally. I'm also an engineer on the OpenShift team. I work on some different OpenShift components, like the API server and authentication. I use Podman and these tools every day, but I'm not actually writing the code for them, so all the hard questions can be directed to Urvashi. When I started at Red Hat, though, I was on the containers team, and things were way different. Back when I started, we were submitting pull requests to upstream Docker. Docker containers had pretty much exploded onto the scene, everybody started adopting them, and that was the only tool available. We at Red Hat were carrying a bunch of different patches on Docker in our own branch, and that's what we were shipping: things that we thought were important, like security and other features that Docker just hadn't accepted yet, or whose PRs were closed, or whatever. You can go back in history and look at a blog that Dan wrote; he lists all of the patches that we had, and the PRs, and such. It's an interesting read, but anyways. Around that time, another company, CoreOS, was developing a container tool called rkt (Rocket). Lots of other companies were getting interested and wanting to develop tools, and it was going to be very difficult without a set of standards around what a container is. And so the OCI was formed. Companies like CoreOS, Red Hat, Docker, Google, IBM, all the big names, got together and came up with open industry standards for what a container image is and what a container runtime is. This was really important in propelling things forward, because now any container image can run on any runtime, as long as you adhere to OCI.
So, what OCI is not: if you were at Dan's talk yesterday, you saw this, we stole it from him. OCI is not a movement started by Dan to get open container laws changed so he can drink beer on the street or in his car. Just so you know, somebody actually stopped him on the street with that jacket and asked, are you really advocating for open container laws to be changed? So yeah, we just want to nip that in the bud.

Before we go on, we want everyone to be on the same page. It seems like most of you are, but I prepared this, so I'm going to say it. Containers are ordinary Linux processes that are constrained, isolated, and given extra security. Processes are constrained using Linux cgroups. Cgroups control how much of a system resource, like CPU, memory, or bandwidth, a process can consume on your system. Linux namespaces give you an isolated view of your system with regard to a resource. For example, if I'm in a container, I'm in a PID namespace. If I have a shell inside that container and run the ps command, I only see the processes in that PID namespace. I have no view of, and no access to, any processes running on the host. Same thing with the mount namespace: if I'm in, say, an Ubuntu container, I have the root filesystem of Ubuntu in my mount namespace, and that gives me the feeling that I'm on an Ubuntu system. That's the idea with Linux namespaces. And then extra security, which I think I already said.

So in order to have a running container, you need a container image. And that is nothing special or magic. It's a tarball made up of layers. The base layer is usually a root filesystem, like a Fedora or Ubuntu or Alpine or whatever, plus a JSON description of certain things, like environment variables, entrypoint, and commands.
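As a rough sketch of those two ideas in practice, assuming Podman is installed (the flags below map directly onto cgroups and namespaces; image name is just an example):

```shell
# cgroups: cap this container at 256 MB of memory and half a CPU core
podman run --rm --memory=256m --cpus=0.5 fedora echo constrained

# PID namespace: ps inside the container sees only the container's
# own processes, never the host's
podman run --rm fedora ps -ax
```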
And then layer upon layer: if I wanted, say, an Apache Fedora image, I would install the Apache package, commit it into a new layer, and update the JSON description. Tar that up, and that is a container image.

In order to run a container image, you need a container engine that knows how to launch a container runtime. A container engine is a program that can take the container image, reassemble that tarball, and extract it onto your local system. It uses a copy-on-write filesystem for that, because, say I want to run 10 Fedora containers: I only need one copy of that base layer, not 10 copies. Copy-on-write allows for sharing of layers. And so the container engine, like Podman, CRI-O, or containerd, takes the JSON description of the image and any user input (when you run a container you might pass a port, or the interactive or TTY flags), combines all of that information into a new JSON file, and uses that to launch the runtime. 99% of the containers we've used use the runc runtime. Docker, CRI-O, Podman, we all use runc. But any OCI runtime can be used; there are others, like crun, gVisor, and Kata Containers.

So now we all know all of that and we can move on. Yeah. So Sally gave an awesome introduction to what containers are. Now we know that the container space can actually be broken down into four distinct actions. These are building container images, running and testing containers locally, storing and sharing your container images, and finally running them in a production environment. Now, each of these actions has very different security requirements. For example, you need way more privileges to build container images than you need to run containers in a production cluster. So imagine what would happen if we had all these actions happening in a monolithic tool.
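That per-container JSON the engine hands to the runtime is the OCI runtime config. A heavily trimmed, illustrative sketch (field names come from the OCI runtime spec; the values are made up):

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "args": ["/usr/sbin/httpd", "-DFOREGROUND"],
    "env": ["PATH=/usr/bin:/bin"],
    "terminal": true
  },
  "root": { "path": "rootfs", "readonly": false },
  "linux": {
    "namespaces": [ { "type": "pid" }, { "type": "mount" }, { "type": "network" } ]
  }
}
```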
We would obviously end up with the least common denominator when it comes to security, which is pretty terrible: sacrificing security just so you have all the functionality in one place. So, following the meme on this slide here, our goal is to deliver better and more secure tools for your container workloads. Hence, we decided to follow the UNIX philosophy, which states that you should design tools to do one thing, do it well, and work well with other programs. As you can see here, the UNIX founders are really happy that we decided to follow their awesome philosophy.

So four container tools were born out of that. Buildah, the name says it all, for building container images. Podman for running and developing containers and pods locally. Skopeo for sharing and storing container images. And finally CRI-O for running your containers in a production cluster such as Kubernetes or OpenShift. And all these tools, including OpenShift, come together to form the amazing Container Commandos. They're the superheroes of the container world and are here to save us all.

So over the last year, the communities surrounding the four tools have really grown, and that's why we wanted to come back and share more with you about the tools. We're going to go through some of the new features that have been added over the past year. The first one is Buildah. Buildah is the tool for building container images. When you build container images, you want to build them securely. The first thing to think about is: only put in your image what you need to run your program. Buildah was designed to make building minimal images easy. With all of our tools, there is no daemon. That's really important. Because of that, you can run all of our tools without root, and our image builds can be truly containerized. But I'm getting ahead of myself. Over the past year, year and a half even, we've been working on updating OpenShift from OpenShift 3 to 4.
And it was released this year. Buildah is the tool used in OpenShift for building images, and we run Buildah containerized in OpenShift. Because there is no daemon, there's no information being leaked from the Docker socket to the container when we're building images. It's really made our image builds in OpenShift much more secure. Also, over the past year Buildah became the default image build tool in RHEL 8.

And we have some awesome demos for you all. They're all live. Hopefully the one we just changed up works; if not, sorry. So again, Buildah makes building minimal images very easy. And it did work, that's good. buildah from scratch sets up the scaffolding of what a container is. It sets up the cgroups and namespaces, but there's nothing in it. It will spit out a working container. From there, you can set a mount point on that working container and use your host system's package manager to install whatever you want into your container image, rather than having to have a package manager inside the image. You can just use your host system and move or copy or install anything you need. We've installed the one single package that we needed, which was BusyBox. Now that we have that, we can remove the mount point and commit that working container with buildah commit to a new image. We called it minimal-image. So now when we run the image, you can see that it's completely locked down. You can't ping. There's no Python. All that's in that image is BusyBox. That's the idea with minimal images: you shrink your attack surface. The more that's in your image, the more can go wrong. So, do you want to talk about the next one? Yeah. So another feature that we have in Buildah is building with Dockerfile.in. There have been cases where you want to build multiple images for different distros, but you basically want to do the same thing in each image.
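The scratch workflow from the demo looks roughly like this (the package and tag names are just what we used here; a rootless user would wrap the mount step in buildah unshare):

```shell
ctr=$(buildah from scratch)         # empty working container, no rootfs
mnt=$(buildah mount $ctr)           # mount point for its root filesystem

# install into the image using the HOST's package manager
dnf install -y --installroot $mnt --releasever 30 busybox

buildah umount $ctr                 # remove the mount point
buildah commit $ctr minimal-image   # commit the working container to an image

podman run --rm minimal-image busybox echo hello
```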
So you'll have like ten lines that are exactly the same across three or four different Dockerfiles. Instead of copying and pasting those lines across all four Dockerfiles, you can create one common Dockerfile, the lowercase one at the bottom, and put all the common lines in there. We use the C preprocessor, cpp, to do this, so you can use its #if, #include, and #define functionality. As you can see here, I have two files, Fedora.in and Ubuntu.in. In Fedora.in, I want to start from a Fedora base image and just echo "I'm Fedora" in it. But for Ubuntu, when I'm building the Ubuntu one, I want to install httpd. As you can see here, I've put a #include of the common Dockerfile in both the Fedora.in and Ubuntu.in files, so I don't have to copy and paste the common lines across. Another cool feature is that you can also do an #if/#else: if you're doing the Fedora one, you dnf install, versus if you're doing Ubuntu, you tell it to apt-get install instead.

So I'm just going to quickly build that Ubuntu image. And as you can see, it built, and we can find my image here. I forgot to clean up our images; yeah, it's called my-ubuntu, it's right there. And then same with Fedora. I didn't do a dnf install because it would sit and take forever to run here, so I just made it echo "I'm Fedora" in there. And once this is done... hurry up. Something went wrong, but OK. But yeah, I basically built that image. So it's more about enhancing the user experience, making it easier for developers. You don't have to sit and copy and paste things around; you just have one file with the common lines. So that's Dockerfile.in for you.

It started the next demo. Yeah, it started the next demo. OK, so this demo. So when it comes to processes, there's always been this long-running battle between speed versus security.
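A minimal sketch of the pattern, shown as one listing for brevity (the file names and the exact packages are just illustrative; buildah bud preprocesses files ending in .in through cpp before building):

```
# Dockerfile, the shared lowercase fragment
RUN echo "common setup shared by every distro"

# Fedora.in
FROM fedora
#include "Dockerfile"
RUN echo "I'm Fedora"

# Ubuntu.in
FROM ubuntu
#include "Dockerfile"
RUN apt-get update && apt-get install -y httpd
```

Then something like `buildah bud -f Ubuntu.in -t my-ubuntu .` builds the Ubuntu variant with the shared lines pulled in.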
The same applies when you're building container images: usually, when you want to maximize security for your build process, you're going to sacrifice speed, and vice versa. So keeping that in mind, we designed Buildah to let users experiment and discover what balance works best for them. Some people prefer fully locked down, even if it's slow; some people prefer really fast, even if it's completely unsafe; and there's another technique that keeps a good balance of safety and speed. When we incorporated Buildah into OpenShift, that was a constant discussion: the balance between performance and security.

So here are the three scenarios, the three ways you can run build processes in Buildah. In the first one, which already ran, I have a podman command and I'm running Buildah inside the container. I'm not even building an image; I'm just trying to pull the ubi8 image. And as you can see, it took 14 seconds to run. In this case, it's completely secure. SELinux is enabled, I'm not mounting any container storage volume into the container, and the process is completely isolated between the host and the container. This is your most locked-down method, and it's your slowest.

The next one is your fastest method, and it's the least secure. In this one, as you can see, I'm volume-mounting the container storage on the host into the container. Now, if the host container engine has already pulled down an image, the container has access to it and doesn't have to reach out to the registry to pull it down, so you're saving time there. And since I'm giving my container write access to that path, I have to disable SELinux; that's what the --security-opt label=disable flag is there for. So this is completely insecure. It's not blocking anything here. But it's the fastest.
And if this is what you want, if you prefer speed over security, you do you. It's up to you; we've given you the option. And then the last scenario is a good balance between security and speed. We have something called additional stores in Buildah, where you can tell Buildah: this is a read-only store, you can use images that are already pulled down into it, but you don't get write access to it. So that's what we're doing here. We're volume-mounting the container storage on the host into /var/lib/shared in the container, which is the additional store, and we're mounting it read-only. The container processes have no permission to write to that path and cannot change any content in the host's container storage, so future container runs will definitely not be affected. You're basically 90% locked down in this scenario. You're still handing the image database from the host into the container, so you're leaking that much information in, but because of that, the image is already cached and you don't have to reach out to the registry again to pull it down. As you can see, that ends up being faster; like in the second scenario, it's already in the cache. It's about two seconds to run, versus the 14 seconds for your most secure method. When we practiced upstairs, it took about 23 seconds for the first one; this depends on the internet speed as well. But yeah, these are the three different ways you can balance your security and your speed and choose what's more important to you when building container images. Back to the slides.

Oh cool, we're on to Podman. Podman, I would say, actually has the most new features in it. Yeah, so once I'm done building my container image, I like to play around with it locally, and for that, Podman is a great all-in-one container CLI tool. You can do everything from building container images to running containers and pods.
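Roughly, the three invocations look like this (the image names and paths are approximations of what the demo used; quay.io/buildah/stable is the stock Buildah image):

```shell
# 1. Most secure, slowest: SELinux on, nothing shared with the host
podman run --rm quay.io/buildah/stable buildah pull ubi8

# 2. Fastest, least secure: host storage shared read-write, SELinux off
podman run --rm --security-opt label=disable \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    quay.io/buildah/stable buildah pull ubi8

# 3. Balanced: host storage exposed read-only as an additional store
podman run --rm \
    -v /var/lib/containers/storage:/var/lib/shared:ro \
    quay.io/buildah/stable buildah pull ubi8
```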
Podman also uses Buildah under the hood. So, tying back to the UNIX philosophy: create tools that do one thing and work well together. And Podman replicates the whole Docker CLI and adds much more on top of it as well. Same theme, as Dan also said: hashtag no big fat daemons. We have no daemon running for Podman, and you can run without root this way. So there's a focus on security there. Podman version 1.0 was released earlier this year, in January, and it's now the default container CLI tool in RHEL 8. It's fully supported by Red Hat, and we no longer ship Docker.

Over the past year, we have added a bunch of new features to Podman, specifically focusing on user-experience enhancements. These are some of the features that have been added. podman pod, as the command says, is the full CLI for creating your pods, playing with your pods, doing whatever you want, sort of replicating how pods and containers work in a Kubernetes or OpenShift cluster. The other command we added is checkpoint and restore. So let's say you're running a bunch of containers doing some database processing, but all of a sudden you need to reboot your machine and you don't want to lose the progress. With checkpoint, you can checkpoint all these containers. Podman saves their state at that point, and you can reboot your machine easily. When you restore, they pick up exactly from that point, so you don't lose anything and you don't have to start from scratch and wait for all the database processing to happen again. Pretty cool for people who like to use containers and databases together.

And the next feature is my favorite one out of all of them: podman generate kube and podman generate systemd. Usually when I'm playing with my pods and containers with Podman, it's super easy to launch containers and pods; it's just podman run, some image, some command, right?
Add in two or three flags, do what you want. And once I've spent some time perfecting what I wanted to do, then I'm thinking: OK, now I have to make this run in a Kubernetes or OpenShift cluster, so I have to go and write YAML and JSON files for that to happen. kind: Pod, name: my-pod, yada yada yada. I'm always in an online YAML checker. Yeah. And I usually don't remember exactly how the YAML goes. I don't know about you guys, but I don't remember it off the top of my head; I'm usually googling how do I do this and how do I do that. But with podman generate kube, you can pass in a pod or a container, and it will literally go through it and create the YAML file you need to run it in a Kubernetes cluster. Then you just take that, plug it in, and your pods and containers are launched as they were with Podman. You can use kubectl, or oc if you're running OpenShift, to create it. Yeah, a very handy feature for developers. Same thing for systemd: you can generate systemd unit files that way. Systemd unit files with containers are a way to launch a container, say, on startup. We don't have a daemon for our containers, so we use what everybody uses for everything else, and that's systemd. So we don't have something like a Podman autostart the way Docker does, because we just don't have a daemon.

Yeah, and then another one is podman unshare. We all have been using Buildah; I'm sure you've used buildah unshare before. It basically puts you in a user namespace, replicating a container environment, and you can do stuff in there. And now, and this is a pretty big thing, Podman is going to start using cgroups v2 in the near future. It'll be the only container tool supporting that, and that shows our dedication to further improving the technology in the container realm.
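The round trip described above can be sketched like this (myimage and the port are placeholders):

```shell
# Perfect the container interactively first
podman run -dt --name web -p 8080:8080 myimage

# Emit Kubernetes YAML describing it, then feed that to a cluster
podman generate kube web > web.yaml
oc create -f web.yaml            # or: kubectl create -f web.yaml

# Emit a systemd unit so the container can start at boot
podman generate systemd web > container-web.service
```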
And finally, a big feature that's coming along pretty well, and that our intern has actually been working on all summer, is Podman remote, which lets you use Podman from a Mac or Windows machine. We'll talk about that a bit later in the talk. And yeah, more demos.

So this demo we've shown before, and we shared it with Dan too, so he showed it yesterday: Podman runs with a true fork/exec model rather than a client/server model. That's important when auditing a system, for knowing who's running what on your system; fork/exec matters. On a Linux system, if you cat /proc/self/loginuid, that gives you the UID of the person currently logged into the system. It does not change whether you go into a container or use sudo; you're always logged as that user. So with the fork/exec model, if I run a podman run command, go inside the container, and look at the loginuid file, you can see it carries through and it's 1000, as you'd expect. If I do the same exact command with Docker, I'll get something different. Oh, sorry. And this value signifies a user who has never logged into the system; it translates to negative one, or the maximum unsigned 32-bit integer, something like that. You'll always get the same value when you're running Docker containers, because it's the Docker daemon that launched them.

Showing this another way: if I set up audit rules for a sensitive file like /etc/shadow, and I have the ability to run Podman with sudo and launch a privileged container, there's nothing stopping me from mounting the host in and making a change to /etc/shadow. But if I go into the audit logs that I've set up, you can see that urvashi (I've taken over Urvashi's machine, so actually I'm still not going to get caught) is messing around with /etc/shadow. If I do the same thing with Docker, you can see that the audit logs say the unset user has been making changes to /etc/shadow.
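The loginuid comparison from the demo, as a sketch (the 1000 is whatever UID you logged in as):

```shell
cat /proc/self/loginuid          # the logged-in user's UID, e.g. 1000

# Fork/exec model: the audit-visible login UID carries into the container
sudo podman run --rm fedora cat /proc/self/loginuid   # still 1000

# Client/server model: the daemon launched the container, so the
# login UID is "unset" (2^32 - 1)
sudo docker run --rm fedora cat /proc/self/loginuid   # 4294967295
```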
So that's a really cool way of showing how important fork/exec is for auditing and knowing who's running what.

All right, so as I said, we added podman pod in the last year, a way of creating pods. I can do podman pod create, and that creates a pod for me. When I do podman pod list, you can see this is the pod that was created, the first one, the one created two seconds ago. And it has one container in there. You can see the infra container running inside it, which basically sets up all your networking and everything you need for your pod and containers to run. Yeah, so every time you run podman pod create, that one infra container is started for you; you don't have to start it, and it's what holds the pod's namespaces and such. So now I can create a container inside the pod by doing a podman run and giving the --pod flag the pod ID. So she's starting just a simple Alpine container inside that pod. Yeah, and then when I do a podman pod list again, you can see that, if you remember, my state was Created up here, and now that I have another container running inside it apart from the infra container, it shows that my pod is actually Running. This is probably going to fail, yeah. Oh yeah, because you just have the flags. So I can do it here. If I do podman ps -a --pod, that shows me the containers that are running inside pods right now, and this is the one I just ran. Yeah, so that's it. And if you stop a pod, it will stop all the containers and clean up for you, basically like how pods and containers work in clusters. So I would encourage you to go run the latest Podman and do podman pod --help, and you can see the help menu and what's available for podman pod. There's also a podman container command; if you're using checkpoint and restore, that's a podman container command.
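The pod workflow just walked through looks roughly like this:

```shell
podid=$(podman pod create)        # creates the pod plus its infra container
podman pod list                   # status: Created

podman run -dt --pod $podid alpine top   # add a container to the pod
podman pod list                   # status: Running now

podman ps -a --pod                # list containers with their pod IDs
podman pod stop $podid            # stops every container in the pod
```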
So if you do podman container --help, you'll see all of the options there, and then there's also just the main podman help menu. Do you just want to do this? Yeah, I'll do it, yeah. Yes.

podman container is also used alongside podman generate kube. So here I have an OpenShift cluster. Can you see that off to the side on the right? Yeah, you see enough. I have my OpenShift cluster running in the cloud, in AWS, and I'm going to show you podman generate kube. I'm going to make this a little smaller; can you guys still see that? OK. So I'm running the command. Well, first, here's the Dockerfile that I built for my hello-openshift image that I want to run in OpenShift. It's just an Alpine image, and I have a simple binary that spits out "Hello OpenShift", a simple web server. Podman supports these run labels, and I added a run label to my Dockerfile. What that does is: every time I run this image with runlabel, this command runs: the podman run, giving it the name hello, setting up the port, and running it in the background. So now I can do podman container runlabel and give it that image. And that container is running in the background; you can see, if I look at the logs, it's just serving there. You can see it's running.

Now, podman generate kube: here's the help menu for it. You pass it a running container, and it will spit out a YAML file that you can upload to OpenShift or Kubernetes. So here I'm going to run podman generate kube with my running container, and here's the YAML. That looks good; I've checked it out. So I'm going to save that to a local file. And here I want to make sure I'm in the right place. You're in the right project. Oh, I am? OK, so I'm in a hello-openshift project and I'm going to show you my pods. I have no pods, but I can run oc create, and a pod has spun up.
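A sketch of what such a run-label Dockerfile might look like (the binary name and port are from the demo; runlabel substitutes IMAGE with the actual image name when it executes the label):

```
FROM alpine
COPY hello-openshift /hello-openshift
EXPOSE 8080
# "podman container runlabel RUN <image>" executes this command:
LABEL RUN="podman run -d --name hello -p 8080:8080 IMAGE"
ENTRYPOINT ["/hello-openshift"]
```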
And I want to make sure it's running before I... yep, it's running, there we go. It's a really simple pod, so there's nothing that's going to block it. You can describe it. I'm going to use just some simple oc commands. If I run oc expose pod, it will set up a service. You could just create a service yourself if you know the YAML for a service, but OpenShift is the distribution of Kubernetes that makes Kubernetes really user friendly, so we've added commands like oc expose, because we know that if you have a pod, you're probably going to want a service. So here... oh, I wanted to show my service. I missed it, but a service has popped up, and I'm not going to miss it this time, because now that I have a service, I can run oc expose again. When you run oc expose on a service, which I will do, you get a route. And there my route popped up. And now I'm showing you, right and left, the command line managing OpenShift and the web console managing OpenShift; I'm just doing the same thing in both. So here I can curl that route, and I should see "Hello OpenShift". Yay. Or I can go over here and show it as well: there's "Hello OpenShift".

So that's podman generate kube, something the Podman team has been working on for the past year. It's new, and they'd really love it if you had some new stuff to add to it, so submit pull requests, RFEs, whatever. It's still up and coming. And you saw how easy that was: take that container, generate kube, convert it to a YAML file, and run it with oc. So, yeah.

All right, so that's it for Podman. Now we're moving on to Skopeo. Skopeo is a pretty simple and really nice tool that you use to share your container images. The biggest feature is that you can easily move your container images from registry A to registry B without even having to download them, or from any container storage to another container storage.
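The oc steps above, condensed (hello-pod.yaml and the name hello are placeholders for whatever generate kube produced):

```shell
oc create -f hello-pod.yaml       # launch the generated pod
oc expose pod hello               # pod -> service
oc expose service hello           # service -> route
curl http://$(oc get route hello -o jsonpath='{.spec.host}')
```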
And you can also inspect remote images: it goes and grabs the JSON that gives you all the information about the image, so the digest, the repo tags, the author, when it was created, et cetera. So you know exactly what you're going to download. Same thing as Podman and Buildah: no big fat daemons. We have no daemon running here. And like Podman and Buildah, it's also a default container tool in RHEL 8. Skopeo was the first Container Commando. I was trying to make a joke about, you know, Captain America being the first Avenger. Skopeo was the first one to break away and start the whole Container Commando revolution. So it's very important.

So, a quick demo. This is my favorite meme, by the way: me migrating my containers to the cloud. A pretty quick demo of Skopeo here. Say you want to move an image from the storage used by Docker to the storage used by Podman and CRI-O. It's a pretty simple command, skopeo copy. The docker-daemon transport just tells Skopeo to pick it up from the Docker storage, and our storage is called containers-storage, so move it there. So I'm going to move that over. It probably already had it, so it skipped over it. We also made our pull and push much better; it's much faster now than last year. Parallel. Yeah. So you'll see the Ubuntu image right here, and I was able to easily move it over. We plan to enhance Skopeo a bit more so you can easily mirror registries: you'll just do a skopeo sync and it will copy over all the images from one repo in a registry to another. And as we said, our goal is user-experience enhancement. And just to add: I use these tools every day, but I'm usually running buildah bud commands, and I use buildah push to push up to my Quay registry. buildah push actually uses the same library as Skopeo, containers/image. So again, all these tools just work well together. That's the goal; that's the design of it.
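The Skopeo operations mentioned here, sketched out (image names and the localhost registry are just examples):

```shell
# Inspect a remote image's metadata without pulling it
skopeo inspect docker://docker.io/library/ubuntu:latest

# Docker's storage -> the storage Podman and CRI-O use
skopeo copy docker-daemon:ubuntu:latest containers-storage:ubuntu:latest

# Registry to registry, with no local download in between
skopeo copy docker://docker.io/library/ubuntu:latest \
            docker://localhost:5000/ubuntu:latest
```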
Also, Skopeo is actually pretty underrated. It's a very simple and nice tool. So if you're in a classroom, running a script, and all you want to do is move container images around and things like that, then you don't actually have to install Podman, because Podman gives you so much more. You can just have a small, simple tool, Skopeo, to do all that for you. And that's actually what we use for OpenShift installs: if you ever go look at the installer, it has a bunch of skopeo copies and skopeo moves in it. So it's a really good tool, and it's pretty underrated.

All right. So now we're up to CRI-O. And if any of you were at KubeCon in Barcelona, or watched the keynotes at KubeCon in Barcelona, you would have seen Urvashi. She was up at the keynote announcing that CRI-O has joined the CNCF. So at this point, I'm just going to step aside and let Urvashi talk about CRI-O. She's also a CRI-O maintainer.

All right, so after you've built your container images, tested them, and pushed them to your registry, the next step is to run them in a production cluster, and that's where CRI-O comes in. CRI-O stands for Container Runtime Interface, OCI. What this means is that CRI-O is OCI compliant: it follows the OCI image spec and the runtime spec, and you can plug any OCI-compliant runtime into it, like runc, Kata, or gVisor. We have crun now, which is the C implementation of runc, and all that. So what CRI-O does is act as the interface between the Kubernetes CRI API and the runtime. It has a lightweight daemon, because it needs to be able to talk to the Kubernetes API, and it tells the container runtime, runc or whichever, what to launch. So basically, if Kubernetes says, I want to launch a Fedora container, CRI-O will go to the registry and pull the image down if it's not already stored locally on disk.
And then it will tell runc: here's the image, these are all the settings I want, just launch this container for me. So that's what CRI-O does; it's the interface. As Sally mentioned, we joined the CNCF earlier this year as an incubating project. And CRI-O is now the only runtime in new OpenShift clusters, so we're fully out there, live.

CRI-O's main focus is to make running containers in production as boring and as secure as possible. Focusing on the security side: users can easily enable read-only mode, which means all the processes in your containers run read-only. They have no permission to write to any path in the container except what is volume-mounted in. So if you ever get attacked and someone tries to install something, they won't be able to, because it's all read-only. Another thing: CRI-O is FIPS-mode compliant, one of the only container tools that is. So if you're running on a FIPS-compliant machine and you launch a container, CRI-O can detect that the host is in FIPS mode, so it knows to block the crypto algorithms that are not allowed by FIPS mode.

And one of the biggest features that got into CRI-O, and actually into all our other tools as well, is registry mirroring. What this means is: say you have registries A and B, and B is a mirror of A; it has all the content A has. You tell CRI-O, I want to launch a container from this image from registry A, but for some reason A is down or can't be reached. CRI-O knows to fall back to B, automatically pulls from B, and continues your process. This is especially useful when you're in a disconnected environment, where you don't want to be connected to the internet and you have a local registry. And when we say CRI-O is for running in production, it's really for running in Kubernetes. CRI-O was designed and built only for running in Kubernetes.
And because of that, we actually walk in lockstep with the Kubernetes version. So if you're running Kubernetes 1.13, you know you run CRI-O 1.13. You don't have to go look up a version map, like "I'm on 1.14, so I have to run 0.10" or something. You always know exactly which version you need. Yeah. And the last demo is... is this the mirroring one? Yeah, this is the mirroring, so I'm going to demo how the mirroring works. Okay. I have a local registry running right now in my VM, and I'm going to start my mirror script. So I'm going to use skopeo to copy an image from my Docker Hub repository to my local registry, and it does that pretty quickly. See how easy that was. Then I'm going to get the digest. I knew that was going to fail, but I already have the digest. So, with mirroring, it only works when you're pulling by digest, just because a digest always guarantees that you're going to get the exact same image regardless of which registry it's on. The digest will be the same as long as your content and your manifest and everything are exactly the same; it's the SHA. So then we have a registries.conf file where you can set up the mirroring rules. As you can see here, I have set it up for docker.io/umohnani, so it happens at a repository level, not a registry level. You can do registry level as well, but we prefer repository level. And for my mirror, I've set it up so that localhost:5000/myrepo mirrors docker.io/umohnani. What I'm going to do now is block my VM from accessing the internet. And then if I run ping, you'll see it can't reach the internet, just to prove to you that I'm no longer talking to the internet. It's 100% packet loss. Now I'm going to list my images. I don't have that image there yet; you'd recognize it by its long digest. And now I'm going to pull it. And as you see, even though it shows it's pulling from docker.io/umohnani, it isn't.
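Those mirroring rules look roughly like this in `/etc/containers/registries.conf`, in the v2 TOML format (the repository and mirror locations below mirror the demo setup and are illustrative):

```toml
# When a client asks for images under this repository...
[[registry]]
location = "docker.io/umohnani"

# ...CRI-O (and the other container tools) will try this mirror first,
# falling back to the original location only if the mirror is unreachable.
[[registry.mirror]]
location = "localhost:5000/myrepo"
```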
I have no access to the internet; it can't reach out to Docker Hub. And that proves that it actually pulled from my local registry. And now if I list my images, you see this image here with the long SHA. Yeah, I just pulled that in. And that's how the mirroring works. So now I can just show that it works: I'm going to run a pod with CRI-O, create a container, start it up, and my container is up and running, just as if I had pulled the image straight from Docker Hub. So yeah. Now imagine an OpenShift cluster in a disconnected environment. Some users have to be disconnected just for security reasons; they don't want to be connected to the internet. And with CRI-O, you're giving OpenShift the ability to install from scratch fully disconnected. You just mirror everything, all your images and your release payloads, to a local registry, tell OpenShift where to pull from, set up all these mirroring rules, and everything installs seamlessly for you. And that's a pretty big feature we also added in OpenShift. Yeah, again, that work is what enables running disconnected clusters. All right, so, getting there. Sorry, that slide was misplaced; this one is first. We just wanted to quickly remind you to run your containers as non-root whenever possible and to enable SELinux, because almost every CVE that's come out for containers has been a file system exploit, and those can be completely prevented if you are running with SELinux and running as non-root. For example, this one came out in the past year. It was a big runc CVE, and it affected pretty much everybody running containers. The exploit was that a process could escape, overwrite the runc binary, and then make it do whatever it wanted on the host. So it was a really bad one, but fortunately at Red Hat we were pretty much okay, because, A, we don't run random images off the internet, and we run as non-root.
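The "run a pod, create a container, start it" steps in the demo map onto the CRI API, which you can drive by hand with crictl against a running CRI-O. A minimal sketch, where the names, the image digest placeholder, and the file layout are all hypothetical:

```shell
# Minimal sandbox (pod) and container configs for crictl:
cat > pod.json <<'EOF'
{
  "metadata": { "name": "demo-pod", "namespace": "default", "uid": "demo-uid" },
  "log_directory": "/tmp"
}
EOF
cat > ctr.json <<'EOF'
{
  "metadata": { "name": "demo-ctr" },
  "image": { "image": "localhost:5000/myrepo/fedora@sha256:..." },
  "log_path": "demo-ctr.log"
}
EOF
# With CRI-O running, crictl talks to it over the CRI API:
#   POD=$(crictl runp pod.json)
#   CTR=$(crictl create "$POD" ctr.json pod.json)
#   crictl start "$CTR"
```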
That's the default in OpenShift; we don't allow containers to run as root in OpenShift. And with SELinux in enforcing mode: because runc runs as container_runtime_exec_t, the container processes are container_t, and container_t can only write to files labeled container_file_t. The runc binary is labeled container_runtime_exec_t, not container_file_t, so SELinux blocks writing to the runc binary. Yeah, so SELinux was really the thing that blocked it. Running without root definitely helped, but SELinux is what stopped it. So remember earlier, when I showed you the faster build process by disabling SELinux? This is what can happen if you do that. All right. Sorry to interrupt, folks, but we are running really close on time, just if you wanted to... Can we have just one more minute? Can we do two minutes more? Yeah, we're like two minutes from the scheduled end, so if you wanted to open for questions, I just wanted to let you know. Okay. All right, so we have a pretty quick Podman remote demo from our intern here, and we can take any questions out in the hall after. We really wanted her to demo this. So, hi, I'm Ashley. I'm an intern on the Container Runtimes team, and I've been working on Podman remote, basically packaging it and making it smooth, so it's easy to install and run. So right now I don't have Podman installed, but if anybody's familiar with Macs, you know Homebrew is the default package manager that everybody uses. So I can do a `brew cask install podman`, and it installs Podman really quickly. And then I have my connection to my host, which is my Linux machine sitting right there; that's its IP address. So now I can just pretend Podman is running on my local machine, and it shows the pods that are on my Linux machine. It kind of looks like it's running on my Mac, but it's actually running on my Linux machine.
And that's the remote aspect in play for Podman. It's done using a varlink bridge: I have a varlink socket running on my Linux machine, and the client communicates with it over SSH. So yeah. Awesome.
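Conceptually, the plumbing under podman-remote in that era looked something like this: the Mac client opens an SSH session to the Linux host and bridges its stdio to Podman's varlink interface there. A simplified sketch; the hostname, user, and socket unit name are assumptions, and the exact command podman-remote generates may differ:

```shell
# On the Linux host, Podman exposes a varlink socket
# (socket-activated by systemd on Fedora/RHEL):
#   systemctl --user start io.podman.socket

# The remote client then tunnels varlink over SSH, bridging stdio:
#   ssh -T core@192.168.1.10 -- varlink bridge

# podman-remote sets this up for you; from the Mac it just looks like:
#   podman-remote ps
```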