So where are all the actions? As people continue to wander in — since I only have 35 minutes left of the 50-minute, or hour-long, slot, I'm going to have to get started. My name is Dan Walsh. I lead the container team at Red Hat. Today we're going to be talking about new container technologies that we've been developing over the last year, year and a half. The way I'd like to start this talk: first of all, how many of you saw Scott's talk yesterday about container technology? A bunch of you. So this shirt is a fairly popular shirt at Red Hat.

A lot of times we talk about containers, and the way I like to describe containers is that they're just simply processes on a Linux system. One way to describe containers is to say that they're processes controlled by three things. One of them is cgroups — resource constraints — basically taking a group of processes and putting limits like memory and CPU utilization on them, controlling how much they use so they don't affect other groups of processes on the system. The second thing to think about when you talk about containers is security constraints: I want to make sure that this group of processes doesn't interfere with that group of processes, so there are no escalations, things like that. And the third thing you think about with containers is namespaces. Namespaces give you that virtualization feel. There's a PID namespace: as soon as a process joins a PID namespace it loses view of all the other processes on the system. Similarly with mount namespaces: join a mount namespace, and everything you mount from then on is not seen by your parent, so your mount table starts to diverge from your parent's mount table. So those are the three things that basically make up containers: cgroups, some kind of security constraints, and namespaces.

If you boot up a modern Linux system — RHEL 7 or newer — you'll see that PID 1 is systemd, which boots up the system. If you went and looked at it, you could cat out /proc/1/cgroup and you would see that PID 1 is inside of cgroups; it has cgroups associated with it. If you looked at /proc/1 you could see that systemd is running with SELinux constraints and has users associated with it. If you cat out /proc/1/status you'd see capabilities associated with it. Lastly, if you went to /proc/1/ns you would see the namespaces associated with PID 1. So when you boot up a Linux system, everything on the Linux system is in a cgroup, has security constraints, and has namespaces. By the definition of those things being required for a container, everything on a Linux system is a container — and that's why the shirt says "Linux is containers" and on the back it says "containers are Linux". Really, the whole Linux system is built to build these containers. Container runtimes are all about modifying those constraints — further locking down what a process is able to do on the system. So when people ask me, can I do that in a container, can I run this in a container, I always say: can you run it on Linux? If the answer is yes, then you can run it in a container. Okay, so we're going to talk about the next generation, but let's start by doing this. Please read out loud all text in red. This is excellent: "the container registry", rather than the other registry. Excellent.
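By the way, if you want to check that claim about PID 1 yourself, here is a minimal sketch — just shell commands against standard /proc paths on a RHEL 7 or newer box (run them as root so you can read everything):

    cat /proc/1/cgroup          # the cgroups PID 1 (systemd) belongs to
    cat /proc/1/attr/current    # the SELinux label systemd is running with
    grep Cap /proc/1/status     # the capability sets associated with PID 1
    ls -l /proc/1/ns            # the namespaces PID 1 belongs to

Every other process on the box has the same files, so the same three constraints apply to everything.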
Okay, so since we're trying to do this talk without using the Docker word, we have to bring out the swear jar — for those who maybe aren't native to the US, a swear jar is what American households use: when a child says a swear word, they have to put a quarter or some amount of money into the jar. So if I say the D word during this talk, I will have to put money in there. But the real point is that the D word has sort of dominated the conversation, and it's really just one form of doing containers. I believe that in a lot of ways, because of that, we've been hindered. So five years ago — and I'll say it now — Docker came along and they sort of revolutionized things. They got all this container stuff to take off, and all of a sudden it became the only way of doing it. But containers are nothing more than processes on a Linux system, and because of that single way of doing things we've had some hindrances, in my opinion. What I want to look at is new tools that let us do this container technology and expand it.

So when I look at it: what do you need to do to run a container? What does it mean to run a container on a system? What I'm trying to do here is break it down into core components. When I want to run a container on a system, first of all I have to identify what a container is. And that's really: what is a container? What is a container image? Most people, when they refer to a container, are actually referring to container images. I want to pull something from docker.io — that's a site, I don't have to pay for that one. They want to pull some kind of application down. The real start of the Docker revolution was that they standardized on this concept of an image. An image is a tarball and some JSON files. What you do is create what's called a rootfs — a rootfs is just a directory that looks like root on a Linux system — and then you create a JSON file that basically describes what's in the rootfs. Then you tar the whole thing up, using the tar tape-archive tool on Linux. Now, you can have what are called layered images, which means I install something on top of that rootfs: I tar up the first one, I install something new, then I tar up the difference from the original to the new one, and I create another JSON file that modifies the original JSON file, and I tar that up — that's a layered image. So it's nothing more than tarballs and JSON files. The next thing you do is take these tarballs and put them out on a website, and in this case we call that website a container registry, and we build a protocol to pull those images back and forth.

Now, when these tarballs first came out there was no standard. Everybody was just using the de facto standard — basically what Docker did in the beginning — and everybody was fine with that for a little while. Then CoreOS came along. CoreOS had a different technology called rkt, and what they wanted to do with rkt was support their own application container images, but they wanted to standardize the format. They didn't want one company to be able to control what it is.
And if you think about the problems of one company controlling the data format, just think of Microsoft. Microsoft came out with the .doc format back in the 1990s, and with every single release of their operating system they would basically change the .doc format. So all of a sudden people couldn't send documents around unless they bought the latest Windows or the latest Office products, right? If you had Windows 95 and Windows 2000 came out, people would build documents on Windows 2000 and you wouldn't be able to view them on Windows 95. And of course Microsoft also made sure that LibreOffice and OpenOffice and all these other tools weren't able to interoperate. So what CoreOS wanted was a standard: we have to have a standard for what this image format is. And so they came out with the appc spec. Now, the appc spec was different from the Docker image format — so there was a problem with that, and I've prepaid for the next one.

All of a sudden the big industry companies like Red Hat and Microsoft and Google and IBM basically said: this is going to be bad. What's going to happen is there will be multiple different specifications, so if you want to build applications that are going to ship in the future, you're going to have to have an appc version and a Docker version — that was my second one. We really didn't want everybody having to ship different types of container images. So everybody got together and said we're going to form a standard, and that was OCI. OCI stands for Open Container Initiative. It was a standards body originated by Docker Inc. — I don't have to pay for that, it's the company — plus Red Hat, Microsoft, IBM, Google, CoreOS, and maybe a couple of others. Anyway, they got together and standardized, and as of last December they came out with the OCI image format. This basically defined what goes into an image. So a lot of times when we talk about images from now on, instead of calling it a D image, call it an OCI image — it's a standardized image format, based on the original D image, but everybody agreed to it. CoreOS actually triggered this, long before they were acquired by Red Hat.

So the next thing you need to do — oh, segue — the next thing you need to do is pull down an image. This is one of the tools I'm introducing today. It's been around for a couple of years, so it's kind of weird that I'm introducing it now: it's called Skopeo. How many people have played with Skopeo? Okay, good group. Skopeo was introduced a few years ago, and the whole idea originally was to go out to a container registry and look at the JSON file associated with an image. If you think about some of these images — I've seen JBoss images that are hundreds of megabytes, getting up towards a gigabyte in size. The only way right now with the D tool to look at the JSON file associated with an image is actually to pull the image. Do you want to pull a couple hundred megabytes just to look at the JSON file that describes the image, only to say "oh, that's the wrong image" and throw it away? So what we wanted was to be able to do a D inspect --remote.
We did a pull request to upstream, and they said no, we don't want to clutter up the CLI, we don't want to do that — but it's simple, it's just a web service, just use the web protocols and you can pull down the JSON file; build your own tool to do it. So we built a tool for that called Skopeo. Skopeo — which means "remote viewing" in Greek — was a tool to look at a remote site and just pull down the JSON file associated with a container image. The guy who did this on my team, Antonio Murdaca, decided to go further: originally he just did inspecting images, but then he said, well, I can implement the entire container image protocol, the ability to pull these images back and forth between registries — and he built Skopeo into a tool that can move images around. Skopeo has become really cool because it can translate between different formats: you can copy an OCI-format image into a Docker daemon's storage, you can pull it into a local directory, it can translate from the original image format to the new image format — but the really cool thing is that you can move images from one container storage to another, or from one container registry to another. A lot of people now are using Skopeo to move images around their environment, and we're getting a lot of uptake on it.

We were working with CoreOS to try to get them to embed Skopeo into rkt, and they said they didn't want to embed a CLI tool into rkt; what they wanted was to just use the library that Skopeo was using. So that library became containers/image. github.com/containers/image is now a library for moving these OCI images — and old-fashioned images — back and forth around the environment. You can move between registries without needing any root-based tools: you can sit there as a user and say, copy from my internet-facing container registry into my internal container registry, or copy the files locally. So that became the mechanism for moving an image from the registry to the host.

The next thing we needed to do is take that image and explode it onto disk. In order to run an application in a container we have to re-establish that rootfs: we take those one or more layers and reassemble them. The way you do that in Linux is with copy-on-write file systems — you might have heard of overlayfs, device mapper, Btrfs, there's a whole bunch of them. So we took a lot of the tooling that we had worked on with upstream and built it into a little tiny library called containers/storage: the ability to explode images onto a copy-on-write file system.

And the last thing you need to do when you run a container — what does it actually mean to run the container? Luckily, OCI has standardized that too. There's a standard mechanism for running a container, and that was also specified at the beginning of last year as the OCI runtime specification. The OCI runtime specification says: I pulled down the image, and that image has a JSON file that tells me how to run the container; I also have input from the user; and I might have input from whatever tool is putting this all together. I basically want to take those three inputs and combine them together.
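Before we get to the runtime piece, here's a rough sketch of what that Skopeo and containers/image workflow looks like from the command line — the image and registry names are just examples, and you may need to pass credentials for private registries:

    # Look at the JSON metadata for an image without pulling its layers
    skopeo inspect docker://docker.io/library/fedora:latest

    # Copy an image from a registry into a local OCI layout directory
    skopeo copy docker://docker.io/library/fedora:latest oci:/tmp/fedora:latest

    # Move an image from one registry to another -- no daemon, no root needed
    skopeo copy docker://registry.example.com/myapp:1.0 \
                docker://internal.example.com/myapp:1.0

Okay, back to those three inputs for the runtime.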
So the user might come in and say: I want to run in privileged mode, or I want to run without this capability, or I want to volume-mount this in. So we need to take the user input, plus the input from the application that's setting it all up — I'm going to call that the container engine — and the last input is the stuff from the image. The engine munges all of that together and writes out another JSON file. That JSON file becomes the OCI configuration, and it's part of the runtime. The OCI runtime spec defines what's in that JSON file, as well as what's in the rootfs. It says: you put a rootfs on the system, you put this JSON file next to it, and now I launch an executable that understands the JSON and configures the system. Docker Inc. gave us the first tool to do that, called runc. runc was the first implementation — the de facto implementation — of the OCI runtime specification. Just about every tool that runs containers in the universe now uses runc to create the container.

Okay, so those are the steps you need to do to run a container on your box. Right? Everybody agree with that? Anything missing? Okay — so we don't need a big fat container daemon to do all those steps. And I'm a big pusher against the big fat container daemon, because here we are, five years into containers, and there's only one way to run containers. Everybody knows it: if I ask you how you pull an image, you tell me "d pull". If I ask you how to push it, "d push". How do you build it? "d build". Everything goes through this one daemon. And the biggest problem with the big fat container daemon is that we get the least common denominator of security. What you need to build a container is much different from what you need to run it in production. I need a lot more privileges to be able to write to the container image than I do when I just want to run it, say under Kubernetes. So what we want to do is take these pieces apart and reassemble them into different types of tools for running containers, each one with least privilege. Later on there's going to be a talk about some of the security features we've been able to add by breaking apart the big fat container daemon.

I work for OpenShift, so everything that I do tends to be either for open source or because I'm instructed to do it for OpenShift. So let's look at what OpenShift needs to do to run containers. OpenShift is Red Hat's Kubernetes, our enterprise version of Kubernetes — really, OpenShift is Kubernetes plus plus, with other features and other things we've added on top. If you come to Red Hat and you want to buy Kubernetes from us, we will sell you OpenShift. So what do OpenShift and Kubernetes need to run a container? They need those first four things, but they also need CRI. There's a little story here — CoreOS again. The original version of Kubernetes embedded Docker all over the place inside of the code. CoreOS came along and said: we want to support rkt inside of Kubernetes. So they wrote huge patch sets and sent them upstream to Kubernetes that basically said "ifdef rkt, do it this way; else do it the old way." And the upstream Kubernetes developers at the time said: wait a minute, we can't do this.
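Before we get to how Kubernetes solved that, here's a minimal sketch of that runc step I just described — an OCI bundle is literally a directory with a rootfs plus a config.json; this assumes you've already populated the rootfs with something runnable:

    mkdir -p /tmp/mycontainer/rootfs     # the exploded root file system goes here
    cd /tmp/mycontainer
    runc spec                            # writes a default config.json into the bundle
    runc run mycontainer                 # runc reads config.json plus rootfs, sets up
                                         # cgroups, security settings and namespaces,
                                         # launches the process, and gets out of the way

Anyway — the upstream Kubernetes developers pushed back on those rkt patches.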
Because if we do this for rkt, then all of a sudden Garden or some other container engine is going to come along and say: we want you to support our container runtime as well. So what Kubernetes did is turn it on its head. They said: you guys implement a small daemon, and we will talk to it — via this thing called the CRI, the Container Runtime Interface. Kubernetes defined an interface that it will use to talk to container engines, and if a container engine implements it, Kubernetes will very happily use it. When Kubernetes talks to a container engine, it tells the CRI implementation that it needs a container image; the CRI implementation needs to pull the image from the container registry, needs to store it on top of a copy-on-write file system, and finally needs to execute an OCI runtime. Anything look familiar from the first part of the talk?

So we have all these tools. Another member of my team, when this happened, said: you know, we could take our standard building-block tools here and build our own CRI — and that thing was called CRI-O. The CRI stands for Container Runtime Interface, for Kubernetes, and the O stands for OCI, Open Container Initiative images. We developed a small, lightweight daemon that implements just what's needed for Kubernetes to run containers in the environment, and we called it CRI-O. CRI-O is OCI-based — I already said that — and its scope is totally tied to Kubernetes: the CRI is the only supported interface, it runs containers for Kubernetes, nothing more, nothing less. Let me beat this to death. CRI-O loves Kubernetes. Kubernetes is it. CRI-O is, you know, very loyal to her man; she's never going to go anywhere. Mesosphere might come in and act all cute around her and stuff like that, but she says no frigging way. We've got "definitely not", "not even in the ballpark", "no way", okay. All CRI-O cares about is Kubernetes. It's just Kubernetes.

So, an overview of additional components. There are additional things we needed to build CRI-O, and we'll talk a little bit about those. One of the things we needed to do was translate the input from Kubernetes. Kubernetes has its own specification of what it wants when it runs a container, and we have to translate that specification into an OCI runtime specification. There happens to be a tool inside of OCI called oci-runtime-tools, actually written by one of my guys: a library that will take input from users and generate an OCI runtime specification. We use that inside of CRI-O. The next thing we needed was — again, CoreOS comes along — a way to configure networks. Networking is kind of a strange part of this whole container world. We want to allow different virtual network tooling to come along and plug into the container environment; there are lots and lots of companies building their own, either hardware-based or software-based container networking. So CoreOS defined a standard called CNI, the Container Network Interface, to allow other people to plug in. It's being used with Flannel, Weave, OpenDaylight, Open SDN — I think OpenShift has their own version. So lots and lots of people are building Container Network Interface plugins.
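To give you a rough idea of how the pieces connect — these flags are from roughly this era of Kubernetes and may have changed since, so treat this as a sketch, not gospel:

    # Point the kubelet at CRI-O instead of the default engine
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/crio/crio.sock

    # CRI-O then pulls images with containers/image, stores them with
    # containers/storage, wires up pod networking from the CNI configs in
    # /etc/cni/net.d/, and launches containers with an OCI runtime like runc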
Lastly, to run containers, we need a way to monitor the container. When I launch a container on the system using an OCI runtime, the runtime just goes out and configures the kernel — those cgroups and security settings and namespaces — launches the process, and then goes away. At that point there's nobody watching the container; there's nobody sitting out there asking "did the container exit?" or trapping it. So we needed a tool to watch the container, and that's called conmon. We wrote it in C because we wanted it to be as lightweight as possible. It takes care of logging the output — when you run containers you usually watch what goes to stdout and stderr. It handles the TTY. It serves attached clients. And it detects if the container died and writes the exit status to a file. So any container engine that comes back up can go to conmon — or, since conmon exits with the container, to the data it recorded — and figure out what happened.

Now, the pod architecture. When you're running Kubernetes in your environment, Kubernetes runs pods, not containers. Pods are basically one or more containers running together, and a pod also has this idea of an infra container, or pod container. What happens when you launch a pod under Kubernetes is that it launches this little tiny container program that just starts up and goes to sleep, and the pod's namespaces get attached to it — you have to have a process holding the original namespaces — and then containers get added to it. So if you looked at what happens under CRI-O when you launch a pod: we launch the infra container, with one conmon listening to it, and then one or more containers get launched. That's basically the whole pod infrastructure under CRI-O.

We talked earlier about how much CRI-O loves Kubernetes, and the way we're trying to prove that is that we have the biggest test suites: every test suite we can find, we run before anything gets merged into CRI-O. We don't want CRI-O to ever break — no new feature should ever break Kubernetes. Right now we're running, I don't know, probably much more than 500 tests across nine full test suites. Getting a pull request into CRI-O at this point is pretty difficult; you have to jump through hoops, you have to make sure everything is in a passable state, and no PR merges without everything passing. CRI-O came out and was fully supported as of last December. My engineers wanted to call it 1.0, so we released it back in December, and I hated the fact that we called it 1.0. So the next release we called 1.9, which works with Kubernetes 1.9. Then we released 1.10, which works with Kubernetes 1.10. Anybody want to guess what works with 1.11? Yeah, okay — 1.11 works with Kubernetes 1.11. We are stalking the hell out of Kubernetes, okay? The goal — well, I'll talk about that in a minute — but basically the goal for OpenShift 4.0 is that we'll support CRI-O by default. Right now we support both CRI-O and Docker under the covers, but the goal at 4.0 is to support CRI-O by default. CRI-O is now running a lot of OpenShift Online — if you go on OpenShift Online, you're using CRI-O. If you go to Microsoft and you want to launch a Kata Container, you're using CRI-O. So CRI-O is actually getting out there. But in a lot of ways, I always tell people I want CRI-O to be something you ignore, right?
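Before it disappears into the background, here's a hedged sketch of how you'd actually see those pieces on a running CRI-O node — crictl is the generic CRI client, and the pod and container names are whatever your cluster happens to be running:

    crictl pods              # the pods the kubelet has asked CRI-O to run
    crictl ps                # the containers inside those pods
    ps -ef | grep conmon     # one conmon process monitoring each container
    ps -ef | grep pause      # the little infra container that sleeps and holds
                             # the pod's namespaces open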
The real goal here is to make running containers in production boring, okay? I often ask people — they say, all right, you use this in the back end and that in the back end — and I ask them: what file system do you use? I don't know what file system I have on my laptop. Is it ext4? Is it XFS? I don't know, and I don't care. The only time I care is when something breaks. Our goal here is to make this thing just blend into the background; it's just a feature underneath Kubernetes.

So what else does OpenShift need to do to run containers, beyond what Kubernetes needs? Well, it needs the ability to build images. OpenShift has this concept called source-to-image, where a user just checks something into Git, does a push, and all of a sudden a container pops out the back end of OpenShift, right? So we needed that container image to come out the end, we needed a way to support that for OpenShift, and we needed the ability to push these things to container registries. So this guy right down here, Nalin Dahyabhai, was working with me last year at DevConf.cz. We were sitting there together — he's in charge of containers/image — and I kept saying to him that I need a tool for building containers; I wanted core utilities for building containers. I said, you know, it's just a rootfs: I need to create a rootfs, tar it up, tar up some JSON file, put it together, and build it. And I said, I need some copy-on-write — you've got that, you've got containers/image. Could we throw together something to do that? I told him that in the morning while we were at DevConf, and by that evening he did a five-minute talk showing how he would build container images using containers/storage. So he said, what do you want me to call it? I said, I don't care what you call it, just call it "builder" — what difference does it make? And then he came out with this: Buildah.

The last thing here — this is not the current logo, but this image was the first one we put out. It's a Boston Terrier, supposedly in a hard hat. As soon as we tweeted that we had an icon for this, people came back and said, why do you have a dog with tighty-whities on his head? I still love it. It's much more of a hard hat nowadays, but I like to leave the old one in just for that joke. Okay, so in the coloring book — hopefully you guys picked one up; if you didn't, come and get me afterwards — this is what Buildah is represented as, a dog. And I think it kind of looks like Nalin, don't you?

Okay, so Buildah came along, and again, my idea was core utilities for containers. We wanted a simple interface for it. We needed to be able to pull an image from a container registry to the host, and so we have "buildah from fedora". What this does is go out — using that containers/image library — to a container registry, pull the Fedora image down to the local system, put it on top of container storage, and then create a Buildah working container. "Container" is a way overused word in this world, but basically it has all the data that's associated with a container. The next step is to mount the container: I want a mount point, I want that rootfs mounted on my system so I can just write to it. So we have "buildah mount", and that gives you back a mount point. Okay, now the segue. Anybody ever hear of this command? Anybody know what this command does? It copies content from a container image to the host.
Or it copies stuff from the host into a container image. Really cool, huh? Really cool. I saw that and I said, I'm going to steal that idea. So I decided to go off and build my own tool, and I called it "cp", and I put it into the core utilities on the system — and it works really well. But once I saw that work well, I decided to build another tool. So I built a tool called DNF. Sometimes you call it yum; I used to call it yum; I might call it yum again in the future. But basically with this tool you can actually install content into a container rootfs: I just added a --installroot flag, and you can install Apache into an empty rootfs. But I said, that's cool, I'll invent another tool. I invented a tool called make. With make, I can do this thing called DESTDIR — I came up with this concept of DESTDIR, and I can point it at a rootfs. So what I'm showing here is that you can use anything on a Linux system to populate what's going to go into your container.

The next thing you need to do is populate the JSON associated with the container image, and we have a tool called "buildah config". You can set things like the entrypoint, environment variables, all the different stuff that goes into a container image to identify what the container is. And then finally, we want to take that working container and create an image — create an OCI image on the system — and that's "buildah commit". And of course I want to be able to push it somewhere, push it to a container registry, so we have "buildah push". So with this tooling — and by the way, all this stuff here, no big fat container daemon, right? I don't need a daemon to do any of this. Not only that, I'm showing it running as root here, but with current Buildah we can do all of this as non-root: taking advantage of user namespaces, we're able to do this all as non-root now.

It's simultaneous, all right? You get to try it again. Everybody say the same thing — one, two, three: "What about the Dockerfile?" Glad you asked. So Buildah also has to support Dockerfiles, okay? The Dockerfile has become the sort of de facto standard. I like to think of it as a really crappy version of bash, of shell script, but it's become this de facto thing that everybody wants to support. So we actually had to support Dockerfiles with Buildah. We built a command called "buildah build-using-dockerfile", and it has basically the same syntax as you would expect for running builds. And of course we're engineers, so we're all lazy, so we also have "buildah bud" — and no, Anheuser-Busch was not involved in this decision. But basically, we can build container images using Dockerfiles.

As for a "Buildahfile" — well, it's not called a Buildahfile, but I decided to write this really nice scripting language and I called it bash. So after I wrote bash, I basically have lots and lots of tools out there to build container images. The whole idea here is that what I really wanted with Buildah was to provide a library, or low-level command-line tools, that other people could use to build higher-level container languages. We want others to build those.
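To pull that whole Buildah flow together, here's a sketch — run as root for simplicity (current Buildah can do this rootless too), and the image name, registry, and content are all made up for illustration:

    ctr=$(buildah from fedora)                 # pull fedora, create a working container
    mnt=$(buildah mount "$ctr")                # mount its rootfs, get the path back

    dnf install -y --installroot "$mnt" httpd  # use ordinary host tools on the rootfs
    cp index.html "$mnt/var/www/html/"         # plain old cp works too

    buildah config --port 80 \
        --entrypoint '["/usr/sbin/httpd", "-DFOREGROUND"]' "$ctr"
    buildah umount "$ctr"
    buildah commit "$ctr" my-httpd             # turn the working container into an image
    buildah push my-httpd docker://registry.example.com/my-httpd:latest

    # Or, if you'd rather keep your Dockerfile:
    buildah bud -t my-httpd .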
OpenShift is looking to replace what it does today: right now, source-to-image actually injects the Docker socket into containers to do builds. A lot of times I tell people that that is probably the most insecure thing you can possibly do. If you want to give people access to that socket, I tell you, just go set up sudo so non-root users get root with no password, and turn off your logging — because if you give a non-root user access to that socket, that's what you're doing. If I do evil things on a system via the Docker socket, I can then destroy my container and there's no record of me ever doing anything on your system. So never give that socket to a non-root user. What we want to do with source-to-image is stop injecting it. Lots and lots of people out there are running container builds inside of Kubernetes, and what they're doing is volume-mounting in that socket, which is the equivalent of giving them root on whatever host the build runs on. So we want to be able to do Buildah inside of source-to-image and stop injecting the socket. Ansible Container is also looking at potentially using Buildah, basically using Ansible playbooks to define what's in the container image.

So what else does OpenShift need? We need the ability to diagnose problems. We need people to be able to play in this environment. So we decided to create this new tool and we called it Podman. Podman is part of the libpod effort. We wanted to build a pod manager, a container-management tool — basically a CLI tool that can be used for managing containers and container images — and we based it on what everybody knows, which is the Docker CLI. Podman is now out; we're actually releasing Podman on a weekly basis, and have been for probably the last six months. We're currently at Podman 0.8.3 — the 8 is the month and the 3 is the week. So at the end of the year we're going to be in trouble; we have to have 1.0 by the end of the year because we can't keep our numbering scheme going. But basically: you want to list the containers on the system; you want to run a container on the system; you want to exec into an existing container; you want to list the images in container storage — we've tried to copy everything in that CLI that we care about. Obviously we're not doing Swarm with this command, but most of the commands are done, and lots and lots of people are using it.

There was a great tweet that came out — I guess it's back in May now — and I love this tweet. He says: "I completely forgot that two months ago I set up an alias of docker=podman and it has been a dream." So he'd been running with Podman for two months at that point without noticing, and of course that was a several-month-old Podman. The next reply down says: only downside, there's no book — I'll talk about that in a second. Further down, Joe Thompson replies and says: so remind me, how did you figure out that you were running Docker — I mean Podman — instead of Docker? And he said: I executed "docker help" and it came out with Podman help. I think I owe about three quarters now. So what I'd like you to do right now is go home and try this out, try out Podman. It's available on Fedora, RHEL, CentOS, Ubuntu, and it's fully supported on openSUSE as well. So it's basically gone out everywhere, and we have lots and lots of contributors. And guess what? No big fat container daemon, okay?
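Here's what that looks like in practice — the alias from that tweet plus the everyday commands; the image name is just an example:

    alias docker=podman

    podman run -d --name web nginx    # run a container
    podman ps                         # list running containers
    podman exec -it web /bin/sh       # exec into an existing container
    podman images                     # list images in local container storage
    podman help                       # which is how that tweeter found out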
It works on a fork/exec model — it works sort of exactly the way you'd expect, not as a client-server operation. Podman is really, really cool and does almost everything you need. So, we've talked a lot about containers. The coloring book was handed out before, and I think I'm just about out of time. We have two other talks this afternoon: Nalin is going to be giving a talk — and I'm sure he'll go back and attack me — a deep dive into Buildah; and then Urvashi and Sally O'Malley are going to be talking about the security side — I said there's lots of security stuff that we're able to do by breaking apart containers, so they'll be covering that later this afternoon. Look for those talks. You can take a photo of this slide; the presentation will be up there. I can only answer one question, I guess. Yes?

Question: Is there any tool currently that can update a tag on a container in a remote registry? Any tool — actually, someone has asked for that. The answer is that it has to be built into the container protocol, the protocol that talks between the client and the server. And Vincent's raising his hand back there because he's going to point out that they're working on a standard now to define that. Is that what you're going to tell me, Vincent? Yeah, you can drop a coin in. [Vincent:] The Docker Registry API — not the Docker Registry code base, but the Docker Registry API — has now been donated to the OCI, the Open Container Initiative, as the distribution spec. It is the API that would enable a feature like that, but it's not really up to the client tools right now; they would have to do some shenanigans, like fetch the image and then retag it and repush it. So that would be the place to look for it: opencontainers/distribution-spec. [Dan:] So we actually had a big bug report with someone asking for that in Skopeo, but we'd need to get it into Quay and Artifactory and docker.io, so we really need it to be a standard for how you interact with container registries before we can do something like that. Anybody else? Everybody loves this idea and they're all aliasing it on their machines right now — excellent. All right, anybody want to talk to me? I'll be around, and thanks for coming.