All right, well, Dan, I think you have everything under control, so I can hop on at the end to ask questions, or if you want to take questions yourself, that's fine too. Yep. OK. So this talk is about speeding up and securing container image builds using Buildah. It's going to cover some advanced features that most people don't know about, and some really cool stuff that's been added to Buildah over time. Hopefully most people at this conference have seen Buildah and know what it is, or at least know a little bit, but just to level set, I'll talk a bit about what Buildah is. We have a coloring book out, and this is what the coloring book character looks like: the logo is a Boston Terrier, and of course it's making fun of my accent and the way I say "Buildah" — to build container images. The real goal with Buildah was to be a coreutils-style tool for building containers. By coreutils I mean something low level, a base: what I wanted was an easy way to build container images without always having to use something like a Dockerfile. Because if you look at what a container image really is, it's just a tarball — a tarred-up directory in Linux — plus a JSON file that describes what's in the tarball. So what I wanted was to just create a directory on disk, put some content in it, and then run a tool that would create a container image out of it. That's really what Buildah does. To give you a little bit of the syntax: you do a lot of the commands you would normally see inside a Dockerfile, but you do them on the command line. So you can run `buildah from fedora`, and what that will do is go out to a container registry, pull the Fedora image down to the host, store the content in container storage, and create what's called a Buildah working container.
It's basically an identifier — it allocates some space and such, but mainly it's an entry in the Buildah database telling you that you've got a container. What that container looks like is just an ID; in this case it gets named after the image, `fedora-working-container`. If I pulled it again I'd get `-1`, `-2`, `-3` — it's pretty simple. The next step is to mount the container, so I do a `buildah mount` of that container, and at that point it hands me back a mount point, and I can go into that mount point and actually look at the contents of the directory. That's the basic idea of Buildah. Now the goal is to put content into that directory. When I give a longer version of this talk, I always talk about `docker cp` — a cool feature that lets you copy content from the host into a container image, or copy content from a container image out to the host — and then I start to make fun of it: I built a similar tool for Buildah, called `cp`. So just using standard `cp`, you can copy content from your host into a container image. But you can extend on that: you can use DNF. You can `yum install` directly into the container, because yum has the ability to change the root directory it installs into. Or you could use `make install` to install content into the directory. The real idea is that you can do everything with bash. Buildah also has a concept of `scratch`: you can `buildah from scratch`, mount it up, and now you have an empty directory. If you just want to put your executable into that directory — say it's a statically linked executable — you package it up and you're done. It's really that simple. Now, there are other fields in a Dockerfile that have to be set.
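The workflow just described can be sketched as a few shell commands. This is a minimal sketch; the application name and release version are illustrative, and `dnf --installroot` assumes the host's DNF can see the target release's repos.

```shell
# Create a working container from the Fedora image; prints an ID like
# "fedora-working-container".
container=$(buildah from fedora)

# Mount it and get a path into its root filesystem.
mnt=$(buildah mount "$container")

# Put content in with ordinary tools: cp, dnf --installroot, make install...
cp ./myapp "$mnt/usr/local/bin/"                         # illustrative binary
dnf -y install --installroot "$mnt" --releasever 32 httpd

# Unmount when done editing the filesystem directly.
buildah umount "$container"
```

The same pattern works with `buildah from scratch`, where the mount point starts out as an empty directory.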
Those are all done with the `buildah config` command — things like the entrypoint, environment variables, labels; basically all the other special fields in a Dockerfile are handled there. There's a `buildah run` command which lets you run a command inside the container — again, it matches up with the RUN instruction inside a Dockerfile. Finally, once you're happy with the way your container image looks, you can commit it to a standard image. That image can be an OCI image or a standard Docker image, and then you can push it out to a container registry. Once it's on the registry, you can use any container engine to run it: inside Docker, inside CRI-O, inside Podman; you can use it inside Buildah again to pull it down; and you can use it with containerd or any of the tools that support the OCI or standard Docker image formats. Usually when I give this presentation in front of a lot of people, I have them chant out anything that's in red, so at this point you're all saying, "Dan, wait, what about the Dockerfile?" And Buildah fully supports Dockerfiles and allows you to build using them — that's `buildah bud`, build-using-dockerfile. Everything you can do in a Dockerfile, you can run through Buildah. Then finally I'd have people yell at me that Buildah should have a scripting language, perhaps a "Buildafile", and I say yes, I developed a brand new one called bash. The bottom line here is that we wanted to build a low-level tool that made it easier for people to embed the concept of building container images into additional tools, and we based it on something as simple as possible: bash.
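Continuing the sketch, the config/run/commit/push steps look roughly like this. Names, labels, and the registry are illustrative placeholders, not anything from the talk.

```shell
container=$(buildah from fedora)

# Set the Dockerfile-style metadata fields on the working container.
buildah config --entrypoint '["/usr/sbin/httpd","-DFOREGROUND"]' \
               --env LANG=C.UTF-8 \
               --label maintainer="you@example.com" \
               "$container"

# Run a command inside the working container (like RUN in a Dockerfile).
buildah run "$container" -- dnf -y install httpd

# Commit to an image (OCI format by default; --format docker also works).
buildah commit "$container" registry.example.com/myhttpd:latest

# Push it out to a registry.
buildah push registry.example.com/myhttpd:latest
```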
Other things that have happened: Buildah is also a library — well, we're not really shipping Buildah as a standalone library, but in the Go world a lot of people vendor it into other code. The Buildah code has been taken and embedded into OpenShift for doing image builds, so any time you're doing container builds in OpenShift 4, they're actually using Buildah inside their environment. Obviously `podman build` pulls in the Buildah code to do builds — that's more for Dockerfiles. There's Ansible container support, and other people are looking at embedding Buildah all over the place to build containers with it. Another thing I want to talk about quickly: this talk is called improving the speed of builds, but it's also about improving the security of container builds. Most people, when they build container images, are either doing it manually, or inside a CI/CD system, or even inside something like Kubernetes. And what almost everybody does is bind in the Docker socket — because the only way they know to build container images is to use Docker, they have to mount the Docker socket into the environment to do a `docker build` of a Dockerfile. The problem is that from a security point of view this is a very bad idea. I wrote an article back in 2015 where I explained that access to the Docker socket is basically the most insecure thing you can do on a Linux box. It's worse than su — worse than giving someone the root password of a system, or giving them sudo without password access. Because with Docker, I'm able to go and run containers, and I can run privileged containers with the host operating system mounted into them.
I could wreak all sorts of havoc on your machine, and then remove the container and all the logs when I'm done, with fairly simple Docker commands. You'll have no idea that it was Dan Walsh who launched the container that came in and totally screwed up your machine. At least if I go through su or sudo, you'll have a record that I became root on that machine at a certain point. So that's why access to the Docker socket is very bad. And yet people want to build container images inside a CI/CD system, or inside something like a Kubernetes cluster, so they feel they have to give that access. With Buildah, what we're really pushing is the idea that you can build images rootless — we have full support for rootless builds — and we have support for locked-down builds inside of containers. So imagine launching a container that itself builds a container image; you can use a tool like Podman to do that. But a lot of people have stumbled on how to do that — what do I have to do? So we went out and created a whole bunch of buildah images on quay.io. You can pull down these buildah images — we keep them up to date with the current versions of Buildah — and use them inside your CI/CD systems if you want to use Buildah as an image builder. There are three versions of them. There's the stable version, which is based off the stable Fedora releases. Then there's the upstream version, based off the master branch in our GitHub repository. And then there's a testing version, which exists because Fedora has both a release branch and a testing branch; often the stable and testing images are the same.
Basically, we keep these images up to date with the latest version of Fedora and the latest version of Buildah available. There were a couple of things we did inside the Dockerfile that we use to build the buildah container images. I was going to take you out to the Buildah GitHub repository to look at these Dockerfiles, but let me just show you and explain what's going on when we build them. The first thing, obviously, is pulling in the latest version of Fedora. Then we install buildah and fuse-overlayfs into the container — and for size reasons we exclude container-selinux, which would pull in all of SELinux. fuse-overlayfs is our mechanism for mounting an overlay filesystem without being root, so the buildah container can run as root or inside a user namespace as non-root. That's the first step. The next step, after we get the software installed, is to edit the storage.conf file. storage.conf basically describes how the storage driver is going to be used inside your environment. Most people never edit it, but storage.conf has some really cool features, and one we're going to talk a lot about in the next section is the idea of additional image stores. Basically, anybody that's used Docker or Podman has used one image database, one image store, where you pull down your images — usually stored in /var/lib/containers — and that's where you do most of your work. What we wanted to do is allow you to have additional stores: basically, copies of the original /var/lib/containers directories in a different location.
We envision that you could share these over network storage and things like that — they're really just read-only stores to be shared between containers — and I'm going to show you some of the power this gives us. So in the Dockerfile we're doing two things with a sed command: we're turning on the mount program, to tell container storage to use fuse-overlayfs, and we're enabling additional image stores. I picked /var/lib/shared as the directory where buildah will look for additional stores — so it looks in its main store as well as in /var/lib/shared. The next thing we do is create a couple of files: we not only have to create the /var/lib/shared directory we just specified, we also have to create a couple of lock files, or buildah will blow up — container storage requires these lock files to exist. That's all those lines are doing. Finally, we set a couple of flags in the environment to tell buildah to run in user-namespace mode, without necessarily having to be root. So that's basically the Dockerfile we use to build the buildah containers, and they come with that environment pre-installed, ready to go. Now I want to step back. We're talking about building container images, and whether we want to build them securely or fast. There's always a battle between security and speed when you want to do almost anything on a Linux system: certain security features might slow you down a bit, and a lot of people want to turn off all the security features to gain build speed. It's the speed a process can run at versus the amount of security you can wrap the process with — and when we build container images, we face the same trade-offs.
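The storage.conf edits described above can be sketched like this. This fakes a minimal storage.conf rather than using the real shipped one, and the sed expressions are illustrative — the exact ones in the buildah image Dockerfile may differ.

```shell
# Fake a minimal storage.conf to edit (the real one lives at
# /etc/containers/storage.conf).
cat > storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options]
additionalimagestores = [
]

[storage.options.overlay]
#mount_program = "/usr/bin/fuse-overlayfs"
EOF

# Turn on the mount program so overlay works without root.
sed -i -e 's|^#mount_program|mount_program|' storage.conf

# Register /var/lib/shared as an additional (read-only) image store.
sed -i -e '/additionalimagestores = \[/a\ "/var/lib/shared",' storage.conf

grep mount_program storage.conf
grep /var/lib/shared storage.conf
```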
The goal in designing the buildah image was to allow people to experiment and make their own decisions about where they want the line between security and speed. So let's look at different ways of building container images. Down below is the Dockerfile we're going to use to demonstrate build speed, and it's a fairly common pattern people put in Dockerfiles: a couple of RUN commands in different spots installing software, and at the end of each install line you'll see `dnf clean all`. The goal of those `dnf clean all` commands is to get rid of any cached metadata or other extra storage that would otherwise end up inside your container image. So we're going to demonstrate how much speed this costs you. What we'll look at first is the most secure way to run the build on a system: totally locked down. We're running buildah inside a container here, using Podman to run it. We add a device to the container — in order for buildah to use the fuse-overlayfs filesystem inside the container, it has to have the /dev/fuse device, and that's not assigned by default. Then we have the buildah image we're going to use, and we take a local Dockerfile in my home directory and mount it into the container; since we're locking things down with SELinux, we relabel it so it can be used in the container. The final step just builds the container image. When we run in this mode, the container starts with an empty /var/lib/containers.
There's no pre-installed image inside the container — we don't know what the Dockerfile is going to contain, so it has no content in /var/lib/containers. This means that if there's a FROM line inside the Dockerfile, every image has to be pulled into this container, and the DNF metadata download is also going to have to run for each RUN command. That makes things much slower, but it's the most secure, because it's a totally locked-down container. Although it's running as root in this case — unless we take advantage of user namespaces — it's totally isolated from the host; no information goes into the container from the host in this environment. So at this point I'm going to start a demo. The demo fires up and starts pulling down an image, and I'm going to go back to my presentation because it takes so long — it's pulling down all the content referenced by the Dockerfile. Let's see if it's finished. Oh, it's still going — this is just setting up the demo, sorry about that. The problem is that Fedora has updated since I last ran this. Anyway, let's move on to the next section while the demo does its processing. So we went from the most secure; now we're going to the least secure. The least secure is running pretty much the same command, but this time I'm going to volume-mount /var/lib/containers into the container. /var/lib/containers holds the container images on my host, and I'm mounting it into the container, which means the images are pre-pulled: say my Dockerfile is going to pull Fedora — I'm already running Fedora, so I don't have to pull the Fedora image at build time.
But I have to disable SELinux for this environment, because SELinux would prevent access to /var/lib/containers from the container if a container escaped. What I get in exchange is speed: this is the fastest mode, because the container shares images with the host, so I don't have to pull them again and can use them instantly inside the container. So here I'm showing the slowest mode, pulling the image down into the container. I'm just doing a `buildah pull` — not the full `buildah bud` at this point — and you can see it going out to a container registry and pulling down an image. That took about 19 seconds. Now I'll show you what happens in the second example, where I'm volume-mounting /var/lib/containers in from the host. It's a brand-new container, and when I run in this mode it takes one second — about 18 times faster, mainly because the UBI 8 image was already pulled down to the host. The last example is the hybrid model, where I get a really fast shared environment that is still much more secure. The second example gave the container write access to the host's container storage, so it could write to that directory and cause problems there. The medium one — the Goldilocks one, if you saw my talk yesterday — takes the same /var/lib/containers storage I just mounted, but instead of mounting it at /var/lib/containers inside the container, I mount it at /var/lib/shared, which is where additional stores come in. So inside the container, the container has its own storage in /var/lib/containers, but it's also going to use additional stores.
It has a read-only directory — the one we set up in the Dockerfile — where it looks for additional stores: /var/lib/shared. What's happening here is that I'm taking the host's container storage, the image storage, and mounting it into the container, read-only this time. If I run the container like this, it takes one second. It gives you the same performance for pulling the image — because the image is already stored — instead of taking 18 seconds to pull it, and without allowing the container to write to the host. That's really the point: a huge advantage in speed can be gained just by taking the host's storage and volume-mounting it into the container. So this is fast, since it uses the container storage from the host and doesn't have to pull the images into the container's own store. It still needs to push the finished image to a registry, where a container engine will pull it. From a security point of view it's very secure: it's a totally locked-down container that isn't able to write anywhere on the host. This mode doesn't use user namespaces right now, although we've experimented with allowing additional stores inside a user namespace as well. It's mostly isolated from the host. There is some information leak into the container, in that the container can see which images are being used on the host — but those container images are usually available on a registry anyway, so you're typically not storing much secret information in those directories. Those are the trade-offs. So we've shown that through the use of additional stores, we can speed up building images, because you don't have to pull the images. Really, in my opinion, we haven't taken advantage of additional stores as much as I would like.
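The three modes discussed above might look something like this, from most secure to fastest to the hybrid. The image name, build-context path, and exact host storage path are assumptions for illustration; the real invocations in the demo may differ.

```shell
# 1. Fully locked down: empty /var/lib/containers inside the container,
#    so every FROM image must be pulled fresh. /dev/fuse is needed for
#    fuse-overlayfs; :Z relabels the context directory for SELinux.
podman run --device /dev/fuse \
    -v "$HOME/build-context":/build:Z \
    quay.io/buildah/stable \
    buildah bud -t myimage /build

# 2. Fastest but least secure: share the host's image store read-write
#    and disable SELinux separation.
podman run --device /dev/fuse --security-opt label=disable \
    -v /var/lib/containers:/var/lib/containers \
    -v "$HOME/build-context":/build \
    quay.io/buildah/stable \
    buildah bud -t myimage /build

# 3. The Goldilocks mode: mount the host's store read-only as an
#    additional image store at /var/lib/shared.
podman run --device /dev/fuse \
    -v /var/lib/containers/storage:/var/lib/shared:ro \
    -v "$HOME/build-context":/build:Z \
    quay.io/buildah/stable \
    buildah bud -t myimage /build
```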
Additional stores are also available in CRI-O and in Podman. If you're running hundreds of thousands of containers, in a lot of places you're going to be running them on many, many nodes, and we have everybody pulling these images to every node in the environment. Every time there's an update, you have to update hundreds of nodes. The funny thing is: why are we pulling these huge images to every single node? We've been working with HPC — high-performance computing — which requires huge images, many gigabytes in size, and we're hauling those images all over the place. Why aren't we just using shared network storage for these images? When we designed additional stores, our goal was to allow you to set up a big farm of image stores with all the content, and then, instead of even pulling the image to the host at all, you'd just assume it would be available. As soon as I updated an image — say, on a container registry — if I shared all the image storage via NFS, the images would be instantly available to all of the engines: Buildah, Podman, CRI-O could get instant access to images without having to pull. Now, there are potential shortcomings to using network storage for your images, such as network latency and hiccups. But you're most likely already using shared storage — probably something like NFS, CephFS, Lustre, iSCSI, or S3 — to share your volumes, the data the containers are writing. So I don't see any reason why we wouldn't use those same sharing mechanisms for the image content as well. Then, as I said, we'd be able to instantly get our containers up and running without always pulling down images. So, the next part of this talk: we've looked at additional stores — now, why are builds slow? The first reason is obviously pulling images; that can take a long time.
The next one: anybody that's run DNF or yum on a VM or inside a container has seen this huge slowdown — it can take up to a minute before any content actually starts getting pulled down. You type `dnf install httpd` and it just sits there for what seems like forever, like 60 seconds doing nothing, and then you finally see it start to move and pull down Apache. So what's going on there? When you run DNF or yum, they check whether their local cache is out of date. There's a big database on your host, under /var/lib/dnf and /var/cache/dnf, that has all the metadata about all the software available to be installed on your machine. What happens when you run these commands after a while is they go out to a centralized yum repository, find the metadata, and download it. That metadata lists all of the RPMs — but not just that, it also has all the file paths. So you can actually do a `yum install /usr/bin/foobar`, and yum — or DNF — is smart enough to look in its database and say: oh, the foobar executable is installed by the XYZ RPM, and then it goes out and gets the XYZ RPM. Anyway, all that data is stored in huge XML files, and historically XML is expensive to process. These files are huge, and DNF and yum spend a lot of time processing them as they download. This can take 30 seconds; I've seen it take up to a minute. Any time you have updated machines, you see it. So imagine you're doing installs via Dockerfiles — this problem becomes much more painful. If you're doing it once a day or once a week inside a VM, you don't mind it.
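To make the path-based lookup above concrete — a sketch, where `foobar` is a stand-in name, not a real package:

```shell
# dnf resolves a file path to the RPM that provides it, using the
# filelists metadata described above:
dnf -y install /usr/bin/foobar

# The same lookup is available without installing anything:
dnf repoquery --whatprovides /usr/bin/foobar
```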
But if you're running hundreds or thousands of builds inside containers, it gets worse. We showed earlier the Dockerfile I'm going to be running in these tests. One of the key factors when building images is to keep them as small as possible — hence the construct of doing a `dnf -y install httpd` and then running `dnf clean all` when it's done, to get rid of all the cache that was downloaded. So the first time you run DNF inside this container image, it spends that half a minute to a minute downloading all the metadata. Then, right after you install the one package you want, you `clean all` and destroy all the metadata you just downloaded. Later on, in another RUN command, you might install another package — and guess what, you pay the price a second time, spending another half a minute downloading that content. This is the way people design Dockerfiles, so they hit this issue constantly, and it makes the time it takes to build an image really, really poor. So we looked at this problem and asked: how can we make this better? The concept we came up with is called an overlay mount. All the container engines already make heavy use of the overlay filesystem. What the overlay filesystem lets you do is take a lower-level directory — a group of files — mount it into the filesystem, and then create what's called an upper directory. Any time you read content in this overlay, it reads from the lower directory, but if you try to write any content, it writes to the upper directory. So really what overlay is doing is merging the upper directory and the lower directory together, and that merged view is what you see at the mount point.
So overlay allows us to share read-only content from the host while still letting you write into the directory. Unlike a standard bind-mount volume — where you either can modify the directory or you can't — with an overlay you can modify it, but you're not modifying the original content; your changes go elsewhere. The beauty of the overlay mount is that it's writable inside the container but read-only on the host, so containers can appear to modify host content without actually touching it. One of the things we did with overlay volume mounts is we decided to let you destroy the content: when the container exits, the overlay mount cleans up. I'm going to show you why that's important when we get to the demonstrations. Think of overlay volume mounts as being like a tmpfs: any content written while the container is running can be used from that point on, but as soon as the container exits, just like a tmpfs, that data disappears from the system. When we use the overlay mount, it looks just like a standard volume mount in Podman or Docker, but with a special `:O` at the end that tells us to do it as an overlay mount. We're going to volume-mount /var/cache/dnf into the container this way, which lets me take the cache from the host and mount it into the container. So I can pre-create the cache on the host — say we're running on a Fedora 32 machine — pull down the entire cache once a day, keep it on my host, and then share it between all the containers.
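The `:O` syntax looks like this — the same bind-mount syntax with a trailing `:O` makes it an overlay mount: reads come from the host's cache, writes go to a throwaway upper layer that disappears when the container exits. The image tag is illustrative.

```shell
# Share the host's DNF cache with a container as an overlay mount.
# The host directory stays read-only; DNF's scratch writes vanish
# when the container exits.
podman run --rm \
    -v /var/cache/dnf:/var/cache/dnf:O \
    registry.fedoraproject.org/fedora:32 \
    dnf -y install httpd
```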
In this case, /var/cache/dnf on the host is a read-only lower layer for the container, and inside the container there's a /var/cache/dnf where the writable content goes — it actually goes into a container-private data store. As I said, what we recommend is that you keep /var/cache/dnf on the host up to date. If the cache on the host is not up to date, then DNF falls back into slow mode, where it downloads the cache — so to get the speed-ups we're looking for, you've got to keep the host up to date. So here we have the slow mode for running the build. As you can see, it's out there running the first DNF command, and it's taking time: everything going on right now is downloading the yum/DNF cache. And you see it takes forever — we haven't seen anything about pulling down Apache yet. Now it goes into a pause: it's downloaded the cache, but now it has to process it, so it's going through all that XML data. Here it goes back out to the host, figures out it needs additional data, and pulls that down too. And again, all we want is one package, or a few packages, on the system, but we're spending all this time just downloading metadata. Let's see, do we finally start? This is where I need the Jeopardy theme song playing — I guess I should have sped up the video at this point. Let me see, does anybody have any questions while we wait? OK, so Richard Jones asks: what happens if DNF running on the host modifies /var/cache/dnf while a container is using it in its storage? That's an interesting question, and it's really undefined. I would say: don't do that.
I would recommend that you don't run those host scripts while your builds are running. The problem is that the kernel's overlay mount basically says: if you modify the lower-level directory while it's being used in an overlay, the result is undefined. So I can't really tell you what's going to happen, other than that there's potential for trouble. As you see — actually, I missed it — it finally downloaded all of the packages and installed them. And you see it happened fairly quickly, but guess what happened next? We did the `clean all`, and we're back doing the exact same thing again, because I had two different RUN commands inside my Dockerfile. The `clean all` got rid of all the cache, and now we're pulling it all down again. Michael Smith asks: if you're using Btrfs, could you take a snapshot? Yeah — he's describing using Btrfs to protect against the situation Richard talked about: you'd take the Btrfs snapshot, mount that into your containers, and then let processes on the host modify a different copy of the environment. So now we finally see that it's installing. It went through the metadata download twice to install the packages, and finally we're committing the image. Just imagine doing this many times a day — that took nearly 200 seconds, about three and a half minutes, to install just those two simple packages. So now we're going to run the build again, but this time with the overlay mounts. Up here I'm taking the host's cache, which I pre-populated — I'm actually running Fedora 33.
So what you can do with DNF is pull down content for Fedora 32 into a special directory that I created — which might also be a mechanism to fix the problem Richard talked about, where someone could modify the cache while you're running. Basically I'm volume mounting that directory in, and notice I'm mounting it into Podman here, read-only. Then, when I execute Buildah inside of that container, that's where I'm using the overlay mount: it mounts /var/cache/dnf from my Podman container into my Buildah container. So I'm doing two mounts, and running the whole thing right away. I'm not sure why it's pausing right now — when I demonstrated this earlier it worked perfectly, so this might be a network hiccup, I'm not sure. But anyways, you get the idea: what should be happening right now is that this goes very, very fast, usually dropping it from those three minutes down to about 18 seconds. But obviously something has gone wrong on my system. I will go back to the presentation and you guys will have to believe me. Anyways, even though it didn't demonstrate very well, you can see that it didn't pull down all the cache data and process it; it ran right away and started installing the packages, which seemed rather quick. So it did start to work. Now we're going to do the 'clean all' again, and again I don't know what it's stalling for here — it might be that my cache is slightly out of date and it's doing some processing on it. But we don't have to do those huge downloads, so usually you'll see quite a bit of speedup. I'm going to go back to the presentation — actually, I guess that's the end of the presentation and the demo. But if you think about this, right?
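A rough reconstruction of the nested setup in that demo, under stated assumptions: the cache directory, release versions, and the `quay.io/buildah/stable` image are my guesses at the details, not taken from the speaker's screen. The host cache is volume-mounted read-only into a Podman container, and Buildah inside that container overlay-mounts it into each build step:

```shell
# On a Fedora 33 host, pre-populate a Fedora 32 metadata cache in a
# dedicated directory (a separate directory also keeps the host's own
# dnf from racing with running builds):
mkdir -p /var/cache/dnf-f32
dnf -y makecache --releasever=32 --setopt=cachedir=/var/cache/dnf-f32

# Mount that cache read-only (:ro) into the build container, then have
# Buildah overlay-mount it (:O) into the container it is building:
podman run --device /dev/fuse \
    -v /var/cache/dnf-f32:/var/cache/dnf:ro \
    -v "$PWD:/src" -w /src \
    quay.io/buildah/stable \
    buildah bud -v /var/cache/dnf:/var/cache/dnf:O -t fast-demo .
```

The read-only mount at the Podman layer is what makes this safe: even a compromised build can't corrupt the host's cache.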
Because we were able to volume mount /var/cache into the container read-only, we're doing this as securely as possible. There is a potential information leak, but that's really shared content that people would expect to be readable from the host inside the containers, and you can thoroughly control it. If you were doing this on RHEL, you could pre-populate caches for RHEL 7 and RHEL 8, for Ubuntu — well, Ubuntu's apt is better than yum in this category — for multiple Fedora releases, for openSUSE repositories. You could have all of those on a single host and just have, say, a cron job that runs once a day to refresh all those caches. So I guess that's the end of my presentation; at this point we can open it up if anybody has additional questions. Let's see if it finally finished. Okay, so whatever is causing those pauses, it took 167 seconds. It would have been a lot quicker, but I'm not sure what's causing the stall — that's one of those demo things. I should have prerecorded it, and then I could do the fake stuff, but anyways, any other questions? Okay, I guess at this point we could — here comes the host. Yeah, I was just hopping on in case. Yeah, I think we're good to go then. So just as a reminder, folks, we do have a breakout room available; I'll drop a link to that in chat right now, so if you want to continue the conversation, I'm sure that will be there for at least some time. Yeah, so I actually should have pointed out that all this technology is now being put into OpenShift. The OpenShift builders are starting to take advantage of some of these features to speed up builds as much as possible. Really, that's our goal: if you have huge farms of machines, to get rid of everybody pulling the images repeatedly and everybody pulling down the yum and DNF caches all the time.
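The once-a-day refresh described above could look something like this. The script path, directory naming, and release list are illustrative assumptions:

```shell
#!/bin/sh
# /etc/cron.daily/refresh-dnf-caches (hypothetical): keep one metadata
# cache per Fedora release warm so container builds never fall back into
# dnf's slow mode. Extend the loop for other distros/releases as needed.
for rel in 31 32 33; do
    mkdir -p "/var/cache/dnf-f${rel}"
    dnf -y makecache --releasever="${rel}" \
        --setopt=cachedir="/var/cache/dnf-f${rel}"
done
```

Each per-release directory is then volume-mounted read-only into the builds that target that release.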
So I will go to the next room and — Before you hop off, Dan, we just got a question. Michelle is asking if you have any idea when the Ansible containers integration is happening. So that's not something our team is working on, but there is Ansible work going on right now around Buildah. I believe there's a thing called ansible-bender, which is available now, so there are Ansible bindings to all this stuff available at this point. Okay, I will go to the networking room, I guess. Thanks, Dan. Yeah.