and anything he's got is a really nice demonstration. Once again, welcome. Right now we have Will Woods and Jen Giardino presenting Cockpit Composer: building OS images for any platform. All right, over to you.

Oh, hey, my microphone works, yeah. Hi, I'm Will Woods. Oh, and I'm a senior software engineer, apparently. I am. That's not a lie. Yeah, and I'm Jen Giardino, senior interaction designer on the UXD team at Red Hat. And we also have a guest speaker, Jacob. It's all about the hot dog. So welcome to our talk.

Image Builder is tooling to enable the creation of customized OS images. It includes both a graphical UI and a CLI, and it takes content like RHEL and Fedora, along with custom content and third-party content, and enables you to create image files for a variety of deployment types. Currently we are using existing tooling for our backend builder, and we'll talk more about that later in the presentation. The image types that we initially supported were live bootable images, standard Linux virtual machine images, and raw disk images, and then in RHEL 8 we introduced initial support for hybrid cloud image types, and we plan to add more cloud image types in future releases.

The images you build with Image Builder are built based on these things we call blueprints. Blueprints are where you define what packages you want to include in your image, along with basic customizations, like hostname and users with passwords or SSH keys. (There's a sketch of what one looks like on disk just below.)

So I'm gonna take you through a demo of the current functionality that we have in the UI — let me log out and start over so you can see the full experience. On my machine, I have a VM running RHEL 8, and on that machine I have the packages for Cockpit and Composer already installed. I don't know if you're familiar with Cockpit; we refer to that as the RHEL web console, and Cockpit Composer is what we refer to as Image Builder. The web console is a way that I can access that machine using a web-based UI. So I'm going to log in to the web console, and because I have the packages for Image Builder installed, I also see Image Builder in my left-hand navigation here.

When I navigate to Image Builder, I first see a list of all the blueprints that I have access to. The blueprints that start with "example" are blueprints that we provide out of the box when you install Image Builder, and I can also create additional blueprints for the images that I want to create. For this demo, I'm going to modify the blueprint named example-http-server, and first I'm going to modify the packages in that blueprint. On this page you can see we have two panels. The panel on the left displays all of the content that I have access to, based on the repositories that I have defined on my system. The panel on the right shows me what content will be included in the image files that I create from this blueprint, and it is divided into two tabs. The first tab is Selected Components: these are all the items that I have explicitly selected to be included in this image. The Dependencies tab shows the things that we automatically pull in based on those selections. So the first modification I want to make to this blueprint is to remove a couple of packages.
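As an aside, for reference: under the hood a blueprint is a small TOML file, and you can manage the same blueprints from the command line. A minimal sketch — the package set, hostname, and user here are made up for illustration, not taken from the demo:

```sh
# A minimal blueprint as TOML (names, hostname, and user are
# made up for illustration).
cat > my-http-server.toml <<'EOF'
name = "my-http-server"
description = "Example web server image"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"

[customizations]
hostname = "webserver01"

[[customizations.user]]
name = "admin"
key = "ssh-rsa AAAA... admin@example"
EOF

# Push it to the Image Builder backend; it then shows up in the
# same blueprint list the UI displays.
composer-cli blueprints push my-http-server.toml
composer-cli blueprints show my-http-server
```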
And so I don't know if you were paying attention to the numbers on these tabs, but if I roll back those changes, you can see that as I was changing the packages that I included, the number of dependencies also got updated, because it's automatically resolving those dependencies as I change the contents. The easiest way to find packages that you're looking for in the UI is to filter the list. So I'm gonna filter by Node.js, and I'm going to add this package to my blueprint. If I click the Add icon here, it will automatically pull in the latest version of that package. But I could also click on that list item and view the component details for that package, to be a little more specific about which version of that package I wanna pull in. For this, I'm just going to take the latest version of that package and add it to my blueprint. And I also wanna make sure that I have npm included in this image. So I'm just going to filter the list of packages in my blueprint, and you can see that I haven't explicitly selected it here, but it is showing up as a dependency. So when I pulled in Node.js, it automatically pulled npm in for me.

As I've been making all of these changes, they've been captured here in this pending changes modal. I can see these are all the changes that I've made during this current session, and when I'm ready to save them, I can go ahead and hit the commit button. And now those changes are saved in the blueprint.

Before I create an image with these updates in my blueprint, I also wanna go and check the customizations that I have. In the slide that I shared, we support more customization settings than are currently available in the UI. Right now we just have the hostname and the users; we're currently working on adding those other customizations into the UI. But here I can go ahead and add a hostname, and I can add another user, and I can either provide an SSH key or a password. For this, I'll just create a password and hope that I type it the same way each time. And so now I have the extra user in this blueprint, and when I create the image, it'll include those package changes and the customization settings that I included.

Currently, these are the different image types that we can create images for. For this, I'm just going to select the QEMU (qcow2) image type and start that build. All of the images that I've created for this blueprint get listed under the Images tab. So you can see that this image is the one that I just added to the queue; these are other image files that I've previously created for this blueprint. So that's all we're gonna show for the demo.

Next, we're gonna take you through — I guess first, what does Image Builder not do? We create those image files; Image Builder does not deploy those images for you, and Image Builder is not going to manage those deployments. That's something that we're working on: how Image Builder integrates with other products to help you with that complete workflow. So I just wanted to point that out.

And then we wanted to talk about some of the things that we're currently working on for Image Builder. That includes additional image types — I obviously didn't update my slide to remove Google, that's one that was there during the demo — but Hyper-V, IBM, Alibaba. And then another feature we're currently working on is the ability to upload to the cloud: taking those image files that are for Amazon and Azure and being able to upload them, but not deploy them.
Just go ahead and get them there, so that you can then take them further and do what you need to with them in that context. And then there are also some things that we're working on in terms of how we're building those images, and some of the current challenges we have with the configurations that people currently use with kickstarts, which we're trying to figure out how to incorporate into our build process.

So, Jacob has been working on the feature for uploading image files to the cloud, and he's gonna spend some time taking you through the work that he's been doing there.

As Jen just demonstrated, we are currently able to create an image, and that will allow you to download the image and handle it yourself. However, we are currently working on allowing users, when they select an image type, to also be given the option to upload that image to the cloud providers that the image type supports. So in this instance, we can select that we're gonna upload it to Azure, and alongside that, you'll enter all your credentials and where you want the image to go, and then we'll prepare the image and upload it, and it'll be ready to run on that cloud provider. We won't handle the first boot for you yet, but the image will be ready to run on the provider, and it'll be uploaded to where you set it, under your user. After creating an upload, you'll then be able to view the uploads you've created for each image. So for instance, if you have a Satellite upload or an Azure upload or an AWS upload for a particular image, you'll see all the uploads that you've pushed for that image. Now, and I believe that code — has that been merged? Not yet; it's about to go through the PR process and should be live soon. Okay, thank you.

So here I'm gonna talk a little bit about OSBuild, where we're sort of redoing the backend pieces of this. Anaconda is a wonderful, wonderful tool, but it was designed to — well, Neil shaking his head does not think it's a wonderful tool, apparently. I'm not gonna argue with you. It's very good at some of the things that it is supposed to do, and it's the one thing that we know can make bootable images and do it correctly. But we figure maybe there should be another tool, because it really doesn't want to work the way we want to use it. It's designed for installing a system: it wants to run on the system that it's setting up, and it gets really confused and gnarly otherwise.

So OSBuild — and this UI here is a mockup — part of what we're trying to do there is make builds reproducible. When you do your initial image build, you get a manifest out, essentially: a list of the exact package versions that were used in that build. And if a week later you're like, hey, that qcow2 image we did is great, now I want an AMI of that, there can be some problems, because of the way our repos work in Fedora — some of the packages you used in that build might have gone away. So part of what OSBuild needs to do is handle that problem, by caching all of the data that gets used in the images and associating the builds with one another. So we have this concept of a compose, which is basically one point in time of that blueprint and all of the things you built from it. So there's some challenges around how to make that safe and reproducible and reliable.
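As a rough sketch of that "compose" idea in today's CLI terms (blueprint name reused from the demo): `composer-cli blueprints freeze` shows a blueprint with every package pinned to the exact version a depsolve resolves, which is essentially that per-build manifest, and you can start a second image type from the same blueprint later:

```sh
# Show the blueprint with every package pinned to the exact resolved
# version -- essentially the "manifest" of what goes into a build.
composer-cli blueprints freeze example-http-server

# Later: build another image type from the same blueprint. Keeping
# the original packages around is what makes this repeatable.
composer-cli compose start example-http-server ami
composer-cli compose status
```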
So yeah, there's a project called OSBuild that we're doing that's basically a staged image builder. It's a new build system for OS images. It doesn't really change a lot of the Image Builder stuff that we have; it's just this one new backend component, so it's not super disruptive to what we have, but it's going to allow us to do builds that are a lot more reliable, and probably faster and easier, that sort of thing. I don't remember if there's another slide on here. Oh yes.

So the other thing that is interesting about OSBuild, and one of the challenges that we have here — why we have our own customizations and we're not just using kickstart or Ansible — is that when you're building images, you're not running on the system that's the final target. So there's a lot of things that we can't do, that we just can't bake into your image, because it's not running in its intended environment yet. We can put things in the image, but they don't get set up until the first boot, when it's actually live.

But the way everybody does everything these days — from what I've seen, and we would love if you tell us otherwise — is that everybody has enormous kickstart scripts with enormous %post scripts that do a million weird things to set up their images in various ways, and they never really tell us what happens in there, but it's very, very important that we support all of it. And that doesn't really work with the Image Builder model of how images should get built. We want that to be reliable and not involve spinning up an entire VM or running all of this hardware. You should be building images like you compile code: you take a bunch of inputs and you get output, but you don't have to run an entire VM to do it.

The model that we have today with kickstart and Ansible is that you're running a whole bunch of code in the image to have it sort of build itself from the inside. That doesn't really work. It's not predictable, it's not reliable, and it's really hard to troubleshoot what goes on in those things. And sometimes people just do really, really wild stuff in there, because it's shell scripts — they can do literally anything. And sometimes you see very interesting things happening that shouldn't be allowed. The police should come if you're doing the sorts of things I've seen in people's kickstarts.

So one of the themes of what we're working on here is to try to make image building a first-class use case of our packaging and installation toolkit. It does kind of mean we have to lay out stages. OSBuild does very strict stages: you get a file system, you do some stuff to it, and then the next stage gets a file system and some configuration, but they don't share anything else between them (there's a sketch of what that looks like below). We're not necessarily running code inside the image; it's doing things to the image. So it is a different model of building than the one that we're used to, and we can't just say, yeah, we support kickstart, because that's the you're-gonna-run-this-in-place model. So these are the things we're trying to figure out: how to do a good job of the customization things for your images, what you need to get done, but not just give you the big box where you put all of your crazy shell scripts and then it catches fire and we don't know what happened. We would like to give you good tools to do the things you need to do. So yeah, I think that there's an activity with the UX team.
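To make the staged model concrete, here is a rough sketch of the shape of an OSBuild pipeline manifest. The stage and assembler names follow the org.osbuild.* convention the project uses, but the specific stages and options shown here are illustrative, not taken from the talk:

```sh
# A sketch of an OSBuild pipeline: an ordered list of isolated stages
# that each transform the file system tree, plus an assembler that
# packs the final tree into an image. Options here are illustrative.
cat > my-pipeline.json <<'EOF'
{
  "pipeline": {
    "stages": [
      { "name": "org.osbuild.rpm",
        "options": { "packages": ["kernel", "httpd"] } },
      { "name": "org.osbuild.users",
        "options": { "users": { "admin": { "key": "ssh-rsa AAAA..." } } } }
    ],
    "assembler": {
      "name": "org.osbuild.qemu",
      "options": { "format": "qcow2", "filename": "disk.qcow2" }
    }
  }
}
EOF

# osbuild consumes a manifest like this and runs each stage in
# isolation: same manifest in, same tree out.
osbuild my-pipeline.json
```

Because stages only see the tree and their own options, a failure can be pinned to a single stage, which is what makes the per-stage logs and restarts described next possible.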
Oh yeah, we have documentation. We did good, yay. This is our upstream for Lorax Composer and all of this. It's in RHEL 7.6 and RHEL 8 and Fedora. And yes, if you want to try these things out, you can just install Cockpit Composer; there's an Ansible playbook that will install it for you on, you know, everything. And yeah, so the UX — do you wanna? Yeah, yeah.

So if you guys are gonna be around tomorrow — I don't know if you've noticed, we have a UXD booth out in the lobby. We have a couple of activities that we're doing related to RHEL. This first one is for Image Builder: it's a card sort activity. We would love to have you come by and participate in that activity to help us with refining the UI. Also, on the current challenges we have with customizations: if customizations are important to you, we would love to talk with you in more depth about those customizations and how you use them. So just come find us and we'd love to chat with you. And then also, Sarah is involved in the card sort activity, and she also has an activity not really related to Image Builder, just related to RHEL: it's a top tasks survey. So if you are around tomorrow, please come by and take that survey for us. And that's it. Do you guys have any questions for us? I'll take the first one. I'll pass the mic around. So keep your hands up.

Okay. So yeah, as people raise their hands, I'll be bringing the mic directly to you. If you happen to be in the middle of a row, it would be helpful if you try to move to the end. We have plenty of time for questions, though, because this slot technically goes until 5:35. Oh boy. Let's wrap, y'all.

So I definitely like where you're going with the OSBuild thing, because the thing before was painful. So what did you look at for inspiration, to try to figure out the strategy and approach of how you wanted this new way of building images to work? Because it seems to have shades of some of the other tools that I've worked on and contributed to, and some of it is kind of different. So I mean, where did you draw inspiration from?

You know, I didn't actually start the OSBuild project, so I don't — I think that was mostly Kay Sievers and Lars. How do you say Lars's last name? Karlitski. Karlitski, yeah. I'm not gonna attempt Tom's last name. And yeah, Tom — Gundersen is the last name. So I should ask them about that, but I think mostly it's just: the obvious, sane way to do something like an image build is to break it down into repeatable steps and make sure that you know exactly what the inputs and outputs are of those steps. You know, the inspiration was trying to make code not suck.

Yeah, I'm not working on that, because I'm definitely much more of a front-end developer. The main understanding I have is that Anaconda wasn't built to do this type of thing, and so we need a new and better image build tool to do that for us. They want to make it easier to troubleshoot and easier to understand what's happening in each stage of that process. So it should be a much cleaner way of creating these images, providing logs for that whole process so you can understand exactly what's happening. That's my understanding; I can't actually make promises.

I can add a little bit to this. So we're compartmentalizing each stage of the image build process, so that the stages aren't interacting with each other, and you can keep track along each stage of where you are.
And if it errors at any point, you'll be able to get logs for that particular stage, and the errors will only be related to that stage. And then if you are trying to debug it, you'll be able to start again from that stage in the image build process. We're keeping track of the file system through treesums — checksums of the whole file system tree — which each stage will generate. So as Will said, you have an input and an output to each stage, and ideally, if you give it the same input, you'll get the same output no matter when you run it: build from a specific input now, and from that same input a year from now, and it should give you the same output, since we're compartmentalizing each stage. If there are package changes in some repository you use, that shouldn't affect the build process beyond that one package.

Right. Yeah, we want each step of the process to be — not quite deterministic, but close to deterministic. And that's kind of a big problem with a lot of the existing stuff, because yeah, the repos change every day. If you run the same kickstart script two days in a row, you don't necessarily get the same image out, and that's something we really need to address. And it turns out there are some deep assumptions in the RPM ecosystem that we have to get people to rethink. Like, packages shouldn't just disappear because there's a newer version; that's not great. I mean, if it's critically broken, yeah, we should revoke it or whatever — your build should fail if we had to revoke a package because it was critically broken — but in general, we shouldn't be doing that. So there's something we have to do with how repositories are structured. There are some weird, deep things we need to do in and around the RPM ecosystem to make image building reliable and predictable. So there's a lot of weird work that's a cousin to these things that we're gonna be doing, but OSBuild is sort of where it starts. Backend-wise, anyway.

And then in the front end, some of the advantages we would get with this new OSBuild: in this example, you can see the first item we have is — I don't know what we're gonna call it; in this mockup it's the compose, but we may end up calling it a manifest. It's the set of package files that are going into your image file. So first I create a specific image file type, like for Amazon, after this OSBuild process. Then two weeks, a month later, I could choose to create the exact same image, but for Azure, and I can see these are all the image files I've created for that compose, or that manifest. And then also, with the work that Jacob's doing where you can take that image file and upload it somewhere, being able to see these are all the uploads that I've kicked off from this image file. And then for each of those things, being able to access the logs — for the OSBuild, for the image file creation, and for the upload — so you know exactly what's happening at each stage of that process.

Next question. I'm gonna pass the microphone, but I've been asked to remind everyone here: we still have plenty of time for questions, but when we do finish questions, please clear the conference room completely and also move out of the lounge — basically, the whole space needs to be vacated for a period of time. It's recommended that you maybe get dinner; the party is going to be at 7 p.m., I believe, in the Terrace Lounge.
Also, I was asked to remind you that you can still sign up for Lightning Talks; they're gonna take signups during the party, I think.

So, this is great — we had long discussions about this earlier. A couple of things occurred to me during the talk that I'd like to ask if you've considered. The first is: you talked about targeting various cloud providers, and one of the trickiest parts I've found in producing images for cloud providers is the stage that you kind of alluded to when you talked about the problems with running kickstart and all that first — which is, how do I configure this once it gets launched? And the standard there is cloud-init, whether we like it or lump it. You did a live demo, so I'm gonna ask: can you pull up what it looks like — do you have UI for customizing specifically that cloud-init, or a comparable stage in an image build?

I don't think we have UI for that yet. Yeah, we don't have UI for that yet. Okay, fair enough. But is that considered as part of the effort? Yes — I mean, there's supposed to be somewhere where you can drop in a cloud-init configuration or something like that. It's pretty trivial to drop a file into place. Yeah, we're considering that, and we would love for you to come tell us what you think that should look like, because we don't really have a good sense of what people expect there.

It's kind of like a contract, and cloud-init is really wide open. What would be nice is to be able to look at an image at some point and say: this is what the cloud-init on it expects; this is the contract for what you have to fill in when you then go — say you upload it to Glance for OpenStack — when I deploy this image, before I do, I'd better consider these things. The standard, for instance, is that I'm going to do PKI and SSH into the machine once it's up, so I'd better provide a key pair, right? Well, that actually is provided via cloud-init, and it's just known that that's the way it works. Whereas if there were some way of, first of all, specifying here on the image that these are the cloud-init modules that we're putting in there — and it might be more than just dropping an RPM in there; this is what I'm actually going to allow in there, a configuration file saying what you enable or what you would ignore, perhaps for security reasons — and then to be able to query that and say, this is what it is. So that kind of information. It sounds like you're thinking along those lines. That sounds great.

Yeah, what we have right now for adding sort of arbitrary content to the image is — you may have noticed that when you're working on the blueprint, you commit your changes. On the back end, it's just a Git repo that we're putting them in, so there's actually a full history back there. You can add other stuff to that Git repo, and during the image build, we put that into an RPM and install that RPM into your image. So you can lay down whatever files you want, wherever you want, pretty easily. So there's a hook where you could probably do the cloud-init thing you want here. But we really do need better information about what common options or things people care about when they say, I use cloud-init to set up these things — what that UI would look like, what sort of questions we would be asking. That's what we need to know more about.
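As an illustration of the kind of thing that hook allows — this is hypothetical, not something shown in the talk, though the keys are standard cloud-init configuration — you could lay a cloud-init drop-in into the image:

```sh
# Hypothetical cloud-init drop-in of the sort you might place into an
# image at /etc/cloud/cloud.cfg.d/ via the blueprint's Git repo (the
# exact repo layout isn't shown here). The keys are standard
# cloud-init options; the policy chosen is just an example.
cat > 99-image-defaults.cfg <<'EOF'
ssh_pwauth: false
disable_root: true
EOF
```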
So there's the technical feasibility of it — yeah, sure, you can do whatever you want — and then there's how we can help you make sense of it, which is the more interesting and complicated question. Did that make sense? Yes, very much so, very much so.

To extend that question a little bit: I mentioned cloud-init, but we're moving into a CoreOS world where that stuff is done with Ignition, which runs earlier but is really, really similar. So is the abstraction gonna be one or the other — here's the configuration thing, as opposed to cloud-init? Have we thought in terms of multiple stages for that? Because it's kind of the same problem as with Anaconda. We've thought about it, but yeah, we still don't have clear ideas about whether everybody in the world uses cloud-init or whether they want other options. So we're approaching it from a let's-see-what-people-ask-for angle, and see what we can do a good job of. If I understand it, CoreOS started off with cloud-init and then moved to Ignition because they needed more. Yeah, they needed to do some more stuff.

I think I can just squeeze in one other question. You used the word manifest earlier, which is interesting, because you have blueprints, which say: this is how I want to build it. If you use the word manifest in the airline sense, it's actually who actually boarded the plane, as opposed to the passenger list. And you talked about making these things reproducible. So is there an artifact — separate from doing an RPM query of, I asked for httpd, I got httpd such-and-such, blah, blah, blah — can I get that actual manifest and use it to rerun the blueprint more explicitly, to reproduce that build? Yes. It's all sort of implicit in the UI, but yeah, we do write out a manifest that has the exact — or I think we do — the exact versions of what went into that build. And OSBuild keeps its inputs. So somewhere in there, yes, there will be an actual manifest that has the actual listing. We don't expose that in the UI anywhere right now, I don't think. We don't expose it right now. But yeah, the intent is that it would be a thing that you could fetch and then feed into other stuff. (There's a sketch of the closest current CLI equivalents below.)

So, you may have already mentioned this, and if so I apologize, but OSBuild is kind of meant to reproduce builds retroactively, right? If I knew from the start that I wanted to build, say, an AMI and a qcow2 image, could I do both of those in one step, or would I have to build the first and then go back and build the second? That's an interesting question. In the UI, anyway, you have to do them as separate things. There is a CLI for this, so you could kick off both of those at the same time. The problem is that with Anaconda, they're two totally separate builds, because Anaconda starts from basically booting up a machine. With OSBuild, we would be able to do both of those at the same time, in theory. But in practice, this is one of the interesting UX problems: in what order do we ask these questions? We don't ask you what kind of image you're going to build at first, because it's not really relevant until you're actually building an image. But in theory, we could do that earlier.
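Picking up that manifest point for a moment: the closest current handles on that artifact are in the CLI, roughly like this (the UUID is a placeholder for a finished compose):

```sh
# Fetch the stored metadata for a finished compose -- the record of
# exactly which package versions went into that build -- plus its logs.
composer-cli compose metadata <uuid>
composer-cli compose logs <uuid>
```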
So at this point in time — this is the work that Jacob's currently doing — we do ask you what image type you want to create, and if you choose a specific image type, you might have other upload options available for it. So I guess my question for you is: at the time that you are building this image, it sounds like you would want to be able to kick off multiple image files from this creation process. Do I understand you correctly? Yes. Okay. Yeah, that's very interesting. That's not a use case I'd considered, but it's definitely an interesting one to consider.

So, as I was walking up: I think you mentioned the problem that often you'll go to repeat a build of an image, but the package repo no longer has that old version of an RPM, is that correct? Yes. So, I work on a piece of open source software called Pulp that I might suggest would solve your problem. What you could do is, instead of directly downloading those RPMs from that repo, you would set up Pulp. Pulp is used to manage package repos: it's commonly used to either create your own or to sync or mirror one from the internet or whatever. In both cases, it can keep old versions or snapshots of those repos. So the idea is, whenever you go to retrieve these RPMs, you'd instead say: hey, Pulp, get all the metadata for this repo at this current time, and then lazily sync the RPMs that you request. So for a package repo like EPEL, it'll sync the metadata for all 10,000 packages, but then the packages you requested — like 200 — would be kept on disk permanently in that version.

Yeah — sorry — yeah, there are definitely various tools like that which work around the fact that the RPM ecosystem, as designed, drops packages. Pulp is one of a few ways to work around that, and yeah, I've heard good things about Pulp. But if I can rant for just a second: why don't we just fix that, and not throw out packages just because they're old? would be my suggestion. This is one of the things I'm going to be talking about, or trying to start, on the fedora-devel list soon. The reason that packages disappear from the mirrors at this point is basically that it gets really big over time, because RPM is a terrible storage format for a lot of very, very similar data. So we might need a new repo format. I've been running some tests, and if you have a repo format that deduplicates the content at the file level, you can store everything — all of Fedora 29, every package that's ever been released for Fedora 29 — in 15% less space than the current repos take. So the only technical reason we have to throw out old packages is that it makes the metadata too big, the repo itself gets very large, and the mirrors are getting really mad at us. It's not that much data to solve. Exactly.

Yes, the main problem that we face is making image building a first-class use case of the RPM ecosystem and of Anaconda — it wasn't really designed for this, so we're having to find these places and try to fix them. So yes, I think OSBuild currently has a way of making sure that it is holding the data that it needs for the blueprints that it knows about. Is that — did I just make that up? I'm not sure. Okay, don't quote me on that. I just made that up. Okay, that's fine. That's a dream I had, and please don't.
Unfortunately, my dreams don't yet turn into code automatically, so you might have to wait on that one. But yeah, I do want to fix the deeper problem of why we get rid of packages at all. I secretly just want to write a new package manager in Rust, but they won't let me just do that, so I'm finding an excuse this way. Interestingly — sorry, I'm going way off on a tangent here — the repo format that I'm playing with looks a lot like SquashFS. Because SquashFS does some of that — No. Yes, no. Right, it would be cool if our file format wasn't designed for putting on tapes.

Yeah, one of the other things is that it would be nice if you could download individual files rather than having to download the entire RPM payload and then install it — and then frequently install the entire payload and throw out all but one file, because you only wanted one thing. The amount of time and data that we waste in the way we do things now is absurd. Something like 95% of all the data you download when you update your system is thrown out or duplicated elsewhere. So there's a lot to do there to make image building more reliable. We've done really, really well with the tools that we had, but to make this reliable, reproducible, safe, fast, efficient, we're gonna have to do some weird stuff. So thanks.

Yeah, yeah, SquashFS would help with — yes, it will, yeah. So, going on: yes, the idea is basically to have a repo that looks sort of like a Git pack file — content addressable, but with small binary indexes, deduplicating at the file level, probably doing binary diffs of files, because every build is like 99% the same data as the previous build. So why do we store a complete copy of the package every single time? It adds up really quickly. So yeah, there is a whole lot of talk that will have to happen on the fedora-devel list and elsewhere about these things. If you're interested in attacking those sorts of problems, please do come talk to me, because that's the big challenge: making those sorts of systemic changes.

Yeah, so going along with what you were saying: when you said deduplication, do you mean completely identical RPMs, or the fact that in a newer version of an RPM the content has only changed a few percent? I'm sorry, can you say that again? When you talked about deduplication, are you referring to the fact that this package that got updated is, say, 100 megs, but only like one megabyte changes across versions? Right. Yeah — I mean, deduplicating block storage or whatever can't help with that too much, because it's all inside tar.gz or tar.xz. Right. Yeah, it's unfortunate that the way we store the data makes that a hard problem. It doesn't have to be — it's not actually a hard problem if we don't store our data in RPMs. We just need to store it in something slightly different that could be converted to and from RPMs, maybe, is the sort of idea. But yeah, deduplication at the file level isn't super hard to do — I mean, that's what Git does. It's a well-studied problem; it's really pretty easy to do. We can make stuff that's RPM-compatible without being RPM, right. And yeah, it's kind of amazing how much of the container world is just moving tarballs around. It seems like we should have a better story for some of these things — how you make images and construct them and put them together — and we just don't. So somebody's going to do it. Anyway.
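To make the file-level deduplication idea concrete, here's a toy sketch of a content-addressable store of the kind being described — purely illustrative, not the actual prototype format:

```sh
# Toy content-addressable store: each file is stored once, named by
# the hash of its contents, so identical files shared across many
# package versions cost disk space only once.
store=./castore
mkdir -p "$store"

find ./unpacked-packages -type f | while read -r f; do
  sha=$(sha256sum "$f" | awk '{print $1}')
  # Only store content we haven't seen before.
  [ -e "$store/$sha" ] || cp "$f" "$store/$sha"
done
```

An index mapping package names to lists of file hashes is all that's needed on top to reconstruct any package, which is roughly the Git pack-file observation being made here.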
What's that? Yeah, so OSTree — OSTree is great, but it's... OSTree is designed specifically for the use case of being on disk: it wants to use hard links and it wants to be on your disk. It's not a great repo format; it's not something you want to put on the mirrors, because every time you update you have to do like 15 million requests to get everything, because it's per file. And if you try and put an OSTree repo on S3, it gets really weird, because it uses, you know, 15 million objects or whatever. So that idea — the OSTree idea — is sound in how you store the data, but they took most of Git and left out the pack files, and pack files actually are a good idea. And RPM is not a good pack file format either. So, yeah.

Oh, yeah. Oh, yeah. Yeah, oh, yeah, no. Yes — rPath-inspired, interestingly. Solaris, when they rewrote their packaging system, put all of their packages into a big content store where they address all the files by their hashes, and they only download the files they need. Weird. So yeah, once again, I'm stealing ideas from Sun to fix our packaging, which is exactly how we got RPM 20 years ago. Sorry. I think. I don't know. The Sun guys I have talked to are not mad at me about it. Although it is funny to talk to them about some of these things, because that was a very large change — they changed their entire packaging system to this totally different thing. And I asked them, what did the customers think about that? Like, can you just do that? And they're like, yeah, it's fine. And I was like, we can't really just change the entire system in the open source world without anybody yelling at you. And they're like, yeah, we know. Must be hard for you. They're kind of smug. Whatever. Yeah, that's the thing. Yeah, various places.

Were there any other questions? Anybody else just want me to rant about RPM? Are there follow-ups? We only have like four minutes left. How much time? Four minutes. Four minutes? Was there anything else? Thanks very much. Hold up, we have one more question. Yeah. Wouldn't Delta RPM solve this problem? Like, couldn't you keep old Delta RPMs? I propose we not dig too deep into the theoretical challenges. Yeah, I don't want to get too deep into that stuff — really, that's completely different than image building. Yeah. Also, fuck Delta RPM. Ah! It is terrible. Come find me and buy me a drink and I will rant at you about how terrible Delta RPM is. Conceptually, yeah, that would fix the problem, but the implementation is so bad that I have nightmares about it. So. Anyway. Any other questions about Image Builder? Yeah. All right, then. Thanks very much. Thank you very much. And if you're around tomorrow, come talk to us in the lobby. Yeah. Please. And as a reminder, again — I know I've told you all already — we have to clear the whole area, this room included.