Hello, everyone. I'm pleased to welcome Ian McLeod, a developer from Red Hat who also works as the upstream developer for Image Factory and Oz. So please welcome Ian McLeod.

Thank you. Can everyone hear me all right? OK. So as he said, my name's Ian McLeod, and I'm here to give a talk called Image Factory. No clever subtitle. Thank you for coming, even though there's no actual sexy name for this talk. I appreciate it.

I'm going to start out with a demo. Live, yeah, right away. One of the issues that I've had with this project for a while is that, with all due respect to the folks that maintain our installer and some of the other ones, it's very difficult to make the installation of a system look exciting. It's not like seeing a demo of Cockpit, where you're just blown away and you think, this is amazing. So what I'm going to do is kick off a task here. Some of you may have seen earlier that I took a photo of the room. We're going to incorporate that photo into a full system image that I'm going to build, hopefully, while I'm talking. Maybe it'll work, maybe it won't; we'll see in about 15 minutes. So I'm just going to run this script to do a workstation build. We see some things happening; we'll talk about what those are. I'll leave this running in the background. OK, something's going on. Great. We'll ignore that for now and go back to the presentation.

OK, so what are my goals here? I imagine some of you have already heard of what Image Factory is. Maybe you know that it's used in a few places in the infrastructure. I'd like to familiarize everyone more with what it is and how it works. I'd also like to talk about a few key pieces of underlying infrastructure and software tooling that are involved in Image Factory. There are some very powerful underlying tools that are much larger and broader than this application that I think are worth knowing about, so I'll talk a little bit about those. I would certainly hope that I'll end up with a few additional users and maybe even contributors; that's something I have struggled with, for various reasons. I will admit I was also planning on cleaning up a little bit more of the project infrastructure, and speaking publicly about it might have incentivized me to do that. I've done some of that; hopefully I'll do a little bit more. A brief trigger warning: there will be a little bit of Python and a little bit of XML in this presentation. Just a little bit, like fleeting nudity. It'll be fleeting Python.

So before I talk in specific detail about what Image Factory is or what the architecture is, I want to talk about its history, simply because that influenced some of the design. Image Factory, when it was conceived, was one of a number of projects that were essentially spun up from scratch as part of a project called Aeolus, which was a cloud aggregation project that originated at Red Hat. The general idea was that people could use multiple public and private cloud providers from a single console in a way that was largely transparent to them. So you could have EC2, OpenStack, and other things in the background, and you would have one console that managed all of that. One of the functions was going to be something that could build images targeted at this variety of clouds. We knew from the very beginning that people were potentially going to use OSes other than RHEL with this, so it was designed to be OS agnostic.
We even had original, minimal support for Windows installs, if you can believe it. And it was meant to be a component. So it wasn't software that you were necessarily going to use on your desktop; it was actually a RESTful service that was a component of a larger system. And finally, it was meant to be an end-to-end solution, which is to say it didn't necessarily output something that needed to be further manipulated or uploaded or pushed somewhere. In the end, you wanted something that was essentially one step away from launch: an AMI, an image on a RHEV-M server or a vSphere server, something that was essentially right there and ready. What we do with it now is a little bit different, but keep in mind that it evolved from this premise.

Starting in around 2012, there was a need to revisit the way that images were being built inside the Fedora infrastructure in particular. Working with Jay Greguske, we created a plug-in to Koji that incorporates Image Factory as a module instead of a service. So we're not actually running the Image Factory RESTful service; in fact, I don't think that's really a use case that's used at all anymore, although we do still have the code in there. It turned out to be flexible enough that we were able to import it as if it were a library. Colin Walters, who I see is in the audience, has actually used it for the rpm-ostree toolbox, another application where you want to create a full system image. And in the process of doing this, we added a command line tool to it. We had a very basic one for debugging originally; we've evolved it to the point where I think it's at least usable for most people. You no longer have to write a JSON file and pass that in as a piece of input, although that's an area where we could use some improvement, too.

And the most significant thing I think we've changed is that although it was originally a cloud image building service, we now target some things that are very much not clouds, public or private. Fairly early on, we added support for this thing called OVA, which is the single-file representation of the Open Virtualization Format. We've more recently added the ability to build Docker base images, and in fact it's used internally at Red Hat and elsewhere to build Docker base images; there isn't really any inbuilt tooling in Docker to do this. More recently than that, we've added support for Vagrant. Who here has used or heard of Vagrant? Just a show of hands. It's a fantastic tool; we'll talk about it more. And as of FOSDEM, thanks to Adam Miller, who put someone from CoreOS in touch with me, I have been promised a patch to add rkt support. What do they call that? Adam, it's not actually called a rocket image; it's called the container image specification format. Cool.

So that's what it is now. Let's just check on our install. All right, it doesn't want to show me, so forget about that. Hopefully it's churning away in the background. You can see why I was nervous about demoing it.

All right, so let's talk about what Image Factory actually does. We conceived the factory as having three primary artifacts: base, target, and provider. A base image is the system image that you want to build, in an essentially cloud- or provider-neutral format. So it hasn't necessarily been adapted for its final destination, but it has all of the things installed; it has the basic structure and format that you want.
Now, in point of fact, the way we use it, that means a KVM image on a server or workstation somewhere that has all of the components you want but isn't necessarily ready for its final destination. We get it there by turning it into a target image, which is this base image modified for whatever provider you're pushing it to. This can be almost a no-op if the provider is, for example, RHEV-M: RHEV-M is KVM-based, we already have a KVM image, so we essentially copy it. It can be something far more profound if your target is, for example, the original version of EC2's image format. EC2, if anyone used it in the 2006-2008 time frame, wanted an image that was essentially a single-file, single-partition image. So we have code in Factory for EC2 that flattens the image out, turns it into that format, and makes it ready to be uploaded. And finally, a provider image is the image that has been customized for a particular target, actually uploaded to it, and is essentially ready to go. The most obvious example of this is an AMI: we've done the flattening in the EC2 case, we've uploaded it to a specific region in EC2, and we actually have it one step away from launch.

We do this with a plug-in model, and we have two different types of plug-ins: plug-ins for creating the base images, and plug-ins for the various targets and providers. For a long time we had two base plug-ins, although the Nova one has now been deprecated. So in point of fact there's really only one thing doing that, and I'll talk about Oz, the primary base plug-in, in a moment. But we have a lot of different plug-ins for different targets and providers, and this structure has allowed us to do things like add support for Docker base images, which is something we really didn't think we would do; it was not on the radar when we conceived the project originally.

So let's talk a little bit about the first step in this process in all cases, which is creating this base image, this image that has the content that you want but is not necessarily yet adapted for the cloud provider. In the primary plug-in, that is done by an underlying tool called Oz. Oz is not part of Image Factory; it's a separate upstream project. The maintainer of Oz is a guy named Chris Lalancette, who was at Red Hat for many years, although he has since gone on to build robots at iRobot, and apparently not the Roomba, but the much bigger ones that are more menacing. Although, yeah, Roombas can be scary too sometimes. I'm a contributor to Oz, and we have a very good continuing relationship with that project; we haven't had to do anything like fork it or even consider that.

So Oz creates an image by installing the system using the native installer, under virtualization. Now, if anyone's familiar with the way that we originally approached cloud images in Fedora, it was not always this way. Who's heard of appliance-creator? A few hands going up, cool. So, back in the day, before virtualization was really ubiquitous, we would essentially do a yum transaction targeted at a subdirectory, which was probably a loopback-mounted file system on a disk image somewhere, right? The issue with this over time: it works great if you're trying to build a Fedora 15 image on a Fedora 15 host, but as the difference increases between the environment you're building from and the environment you're targeting with that build, we ran into certain odd issues, right? Differences in the behavior of yum.
More recently, we would have had the issue that yum isn't even the package manager anymore, and DNF isn't in older Fedora, so you couldn't build a Fedora 23 image using DNF on a Fedora host that didn't have DNF in it. And certainly in the build infrastructure environment, I don't know exactly how much detail I should go into on this, but we want to preserve the ability to use a fairly old OS to build newer OSes. If we use virtualization, all of this goes away, right? We have a wonderful team in Red Hat that develops a piece of software called Anaconda that is focused on creating an installed system. So that is why, ultimately, we went with this model of using virtualization and the native installer. It's something that already has to work, and we have virt available, so why not use it?

Another thing that Oz does, which we don't necessarily do in all of the use cases now but which it was originally designed for and will still do for you, is a minimal installation with a known-working minimal kickstart in our case, or the equivalent formats for other distributions. Why? Well, who here has written a kickstart file that had substantial customizations in it? And leave your hand up if it worked perfectly the first time you tried it, right? It's a very difficult environment to debug in. It's improved, and I'm absolutely not saying anything bad about Anaconda, but it's not the running system, right? A lot of the time you're doing things that you really would prefer to do when the system is up and running in a normal multi-user run level, and you're trying to shoehorn them into the installer. I've done some things to try to get Docker base images cached on the Atomic OS image, for example, and in a few other places, that are just not pretty. So that is why we do what Oz does, which is customize in the running system. It does some things to actually bring the system up and do further customization before it goes on to the rest of the steps in the Image Factory workflow.

And finally, we inventory the resulting system. A lot of the time you can't necessarily know, just from the input kickstart or auto-install file, what is on the resulting image, right? There's a lot of dependency derivation; there are groups that get exploded out into much larger collections of packages; and you don't necessarily know what was in the repo at the exact time that you did the installation. So we take a snapshot and add that metadata in, and that's been very desirable for the build system use case. Just a second.

So how does Oz do this? Well, one thing Oz knows is a great deal about how the install media and potential install sources for a lot of OSes are structured. It knows that if you give it a URL for Fedora, it needs to look for a directory called images, and there will be a kernel and a RAM disk there that will launch the installer, right? This is one of the things Oz uses to its advantage: it knows how to use that knowledge to create a VM that will launch directly into an install environment, in our case a kickstart environment, right?
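To make that concrete, the kind of per-distro knowledge I'm talking about boils down to something like this. This is just a sketch, not Oz's actual code, and the tree URL is only an example:

    # A sketch of the sort of per-distro knowledge Oz encodes (not Oz's real code).
    def fedora_installer_artifacts(tree_url):
        """Given the URL of a Fedora install tree, return the kernel and the
        initial RAM disk that boot straight into the Anaconda installer."""
        pxeboot = tree_url.rstrip("/") + "/images/pxeboot"
        return pxeboot + "/vmlinuz", pxeboot + "/initrd.img"

    kernel, initrd = fedora_installer_artifacts(
        "https://dl.fedoraproject.org/pub/fedora/linux/releases/23/Workstation/x86_64/os")
    # Oz boots a VM directly off that kernel and RAM disk, with kernel arguments
    # pointing at the kickstart, so the VM comes up already running the installer.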
And Oz knows, once that is finished, and particularly if it's a minimal install, how to inspect and modify that VM so that it can boot it up again and SSH into it to do what I was just talking about: customization in the full-fledged system, which is a little easier to debug, a little more likely to succeed, and doesn't require some of the contortions we have to do if we are installing inside of kickstart. And finally, it knows how to pivot, so that this VM that was originally booted, say, from a CD or directly from a kernel, as we'll see in a minute, can go back and boot in the normal way. Adam, yes? It is supposed to be libguestfs, sorry about that. (Adam: I was just curious if there was a different tool set that you were using.) Nope, nope. Pay attention to the last bullet; it's correct on the last bullet. And it knows how to undo that, which is the final point there. Once it's done all of this, it knows how to undo some of the things that it did to make the system accessible.

So let's talk about what enables this: libvirt. I suspect most of you have heard of libvirt. Libvirt is a fantastic virtualization management API. It supports a huge number of languages and quite a few hypervisors. Okay, there's another one on the slide; how did Hyper-V end up in there? I don't know. Anyway: a multi-language, multi-hypervisor virtualization management API. For Factory and for us, one of the key things it lets us do is support an environment that has QEMU or has KVM without really needing to worry, except for a few little details, about which one we're using. It has also helped as more of the secondary, non-x86_64 architectures have come into play: there have been surprisingly few changes we've had to make to the underlying code in order to support launching installations on them. And finally, it's been a stable format, so we don't need to worry about changes in the QEMU command line interface. We have been able to have a block of code that creates these VMs that has been relatively stable over time.

So let me give a quick example here. This first snippet (again, warning, XML ahead) is the change that we have to make to our virtual machine definition between when we launch the installer and when we launch the VM again to do some customization. And this is it; this is the only section that has to change. The section on the top is for when we do the installation. As I said, Oz knows where to get the kernel and RAM disk for the installation, and it knows what command line options to add in order to make the booted virtual machine start doing an installation right away. That is how it does the initial run of the virtual machine. When that has completed successfully, we switch to the second version, which just says boot off the hard drive. And that's all we have to do. It's safe, it's programmatic, it's easy, and it hasn't had to change in forever. We're insulated from any differences in the behavior of QEMU versus KVM and whatnot. It's just been very nice for us.
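The shape of that change is roughly this. I'm sketching from memory here, so treat the file paths and the kickstart URL as made up:

    <!-- During the install: boot the VM directly off the installer kernel and
         RAM disk, with a command line pointing at the kickstart.
         (A reconstruction; the paths and URL here are hypothetical.) -->
    <os>
      <type arch='x86_64'>hvm</type>
      <kernel>/var/lib/oz/kernels/vmlinuz</kernel>
      <initrd>/var/lib/oz/kernels/initrd.img</initrd>
      <cmdline>ks=http://10.0.2.2/install.ks</cmdline>
    </os>

    <!-- After the install succeeds: just boot off the hard drive. -->
    <os>
      <type arch='x86_64'>hvm</type>
      <boot dev='hd'/>
    </os>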
Another thing that we've been able to do with this, as I said a moment ago: when we wanted to add support for... go ahead, did you have a question in the back? No? Okay. When we wanted to add support for detecting different types of virtualization and other architecture types, we did it like this. Oh, sorry, Nikola: if anyone was in Nikola's presentation just before mine, he showed some XML from the libvirt capabilities feature, so I threw this in here at the last minute. Libvirt will tell you, in a structured form, what capabilities the hypervisor has, and this is an example. It's a snippet, edited down of course, from my workstation, and it says essentially that for x86_64 you can use KVM, and this is the emulator; but for 32-bit ARM, armv7l, we still have an emulator, it's just QEMU. And that's because I have installed QEMU on my laptop. It gives you this information in an easy-to-digest form. You don't have to go out and check whether the module is loaded, or try to load the module and handle it if that fails. All of this is just given to you as a nice, clean piece of structured data called the capabilities.

And what that means is that one small piece of code (danger: Python) was all we needed to detect QEMU versus KVM, and by and large it was the only thing we needed to support multiple architectures, falling back to QEMU when KVM isn't available. We look for the presence of that KVM field in the XML I just showed you; if it's there, we use it. If not, we look for QEMU, which you can have simply by installing the right fully emulated version of QEMU, and if you have it, that's great. Otherwise we say: sorry, we need virtualization to do our job. And that was it: 32-bit ARM Anaconda installations work on my workstation with the addition of this to the code base. So that's been very powerful and very helpful.
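That detection logic is roughly the following. This is a sketch of the idea, not the actual Image Factory code:

    # A sketch of QEMU-vs-KVM detection via libvirt capabilities
    # (illustrative only, not the actual Image Factory code).
    import xml.etree.ElementTree as ET
    import libvirt

    def best_domain_type(conn, arch):
        """Return 'kvm' if the hypervisor offers it for this architecture,
        fall back to emulated 'qemu', or give up."""
        caps = ET.fromstring(conn.getCapabilities())
        for guest in caps.findall("guest"):
            arch_node = guest.find("arch")
            if arch_node is None or arch_node.get("name") != arch:
                continue
            types = {dom.get("type") for dom in arch_node.findall("domain")}
            if "kvm" in types:
                return "kvm"
            if "qemu" in types:
                return "qemu"
        raise RuntimeError("sorry, we need virtualization to do our job")

    conn = libvirt.open(None)                # connect to the local hypervisor
    print(best_domain_type(conn, "armv7l"))  # 'qemu' on my laptop, per the XML above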
Did I spell libguestfs correctly here? Yes, I did, fantastic. So another key technology is libguestfs. I hope many of you have heard of libguestfs as well. Libguestfs is a very safe, extremely flexible, and extremely rich interface for manipulating virtual machine images; in fact, it can manipulate block device storage more generally, and it has quite a few different access paths.

How safe is it? First of all, who here has heard of libguestfs and knows that every time you make a libguestfs call, you're actually communicating with a virtual machine on your system? Good, all right. It's true: every time you make a call into libguestfs, it's calling into a virtual machine, a virtual machine that it has constructed dynamically the first time you try to use it, built from the file system that you already have. What this means is that you do not need to be root to manipulate disk images. I think most people are familiar with the fact that you can create a loopback device, or go through device mapper using a tool called kpartx, and mount a disk image into your file system. But you need to be root to do that, and you're exposing that file system to your system, so you're potentially open to some fairly obscure attacks that are somewhat dangerous (Richard Jones could tell us more about those). With libguestfs, all of this is insulated inside a VM, and you don't need to be root.

How flexible is it? There are actually API calls in it that can modify the Windows registry if the virtual machine image is Windows. That's pretty amazing to me. I don't actually use that in Factory, but that's the level of detail it goes into. And how rich? Other than Richard Jones, if you've ever met Rich... there's just a huge number of API calls. A lot of the things that you might do in a bash script are mirrored in the APIs. It supports many, many different image types, and it supports a huge number of access paths. You can point libguestfs at a local image; you can point it at the Fedora cloud image via HTTP and mount that up and manipulate it over HTTP; and several others, which I list there, including our favorites these days, Ceph and Gluster. So it's really powerful; it's really an ecosystem unto itself, and if you haven't looked at it, I encourage you to give it a try.

So let's give one example of what libguestfs does here. One of the things we do in both Oz and Factory is take the output of our initial installation, bootstrap it into the libguestfs environment, and manipulate it further. Libguestfs has some very well maintained code, which follows along with improvements and changes in distributions, to inspect what OS is on a disk, show you what all the mount points are, and then let you mount them up. So this is the last little bit of Python that I'll show you. What we're doing here is initializing the guestfs environment, pointing an arbitrary disk file at it, and then just telling libguestfs to go and have a look and see what's in it. If it finds something (and I've removed some of the error checking from this code for brevity), it will tell us what the root file system is, the partition or logical volume that holds it, and it will give us all of the mount points that are present on that system. It will inspect fstab, discover where other things are, and tell you how to mount them, and then you can mount them. This is, by the way, not a mount onto your host system; it mounts them within the libguestfs environment, which we can then manipulate programmatically later on in Oz and Image Factory.

At the bottom, I show that if you don't particularly care to manipulate your file system with Python but would still like the protections and flexibility that come with libguestfs, you can execute a single command, the guestmount command at the bottom, and it will inspect the file system and mount all of the partitions and logical volumes into your file system at your chosen mount point via FUSE, even if you're not root. So really consider whether you want to be doing loopback mounts and kpartx and whatnot when this is available from libguestfs. A very valuable tool, right?
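Roughly, with the error handling stripped out, that inspection flow looks like this. It's a sketch along the lines of what's on the slide, not the exact Factory code:

    # A sketch of libguestfs inspection (error handling removed for brevity).
    import guestfs

    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts("disk.img", readonly=1)   # any arbitrary disk image file
    g.launch()                                 # spins up the helper appliance VM

    root = g.inspect_os()[0]                   # the root filesystem it discovered,
                                               # e.g. a partition or a logical volume
    mounts = g.inspect_get_mountpoints(root)   # every mount point, from fstab etc.
    for mountpoint in sorted(mounts):          # '/' first, then the rest
        g.mount_ro(mounts[mountpoint], mountpoint)

    print(g.cat("/etc/fedora-release"))        # manipulate the guest, not the host
    g.shutdown()
    g.close()

    # The single-command equivalent from the shell is roughly:
    #   guestmount -a disk.img -i --ro /mnt/guest
    # which does the same inspection and mounts everything via FUSE.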
So those are the two underlying technologies. Let's go on to talk about some of the actual cloud and other destinations that we can support with Image Factory.

The very first thing, obviously, the 800-pound gorilla in the room in the cloud space, is EC2, and the EC2 provider is still probably the biggest and most complicated one we have in Image Factory. EC2, as I said, started out requiring a fairly non-standard, atypical mode of uploading your image. You would have to flatten it, and you would upload it to a series of buckets in their object storage service, called S3. The EC2 provider as originally conceived knew how to do all of that: it would take the input image, flatten it out, upload it to S3, and produce an AMI. Over time, Amazon has added things that look more and more like traditional virtual machines. So they have block storage now; you can upload things to something that looks more like a virtual disk, although until very recently it still wasn't truly full virtualization. It would simulate the behavior of GRUB with something called PV-GRUB. And more recently than that, they have finally added something more akin to what we're familiar with if you've ever used OpenStack, which is full virtualization.

All of these are subtly different, and there's a fair amount of EC2-specific knowledge necessary to manipulate an image and upload it in each of these formats. So we know about things like: if I have an image that is designed to boot via GRUB, I have to tell EC2 that it should look for a GRUB configuration file in it, and I do that by saying boot it as if the kernel were this particular thing called PV-GRUB. If I want to upload a block-based image, I can't (I believe this is still the case) just directly upload it. So we have knowledge of those things; we actually maintain utility AMIs in each region that we can boot and run and use to populate the underlying block storage with the image that has been built in Image Factory. This provider is actually used by quite a few people. The most exciting user I know about is Brookhaven National Lab: they've been using EC2 to analyze some of the data from CERN that comes out of the ATLAS experiment, and they use this feature to build the compute images they launch on EC2. A brief warning if you do start to dive into this: the S3 components, which are the very earliest and oldest way of uploading an image to Amazon, and one that they're actually trying to sunset, may be a little bit bit-rotted. So that's EC2.

Some other ones, some non-cloud ones, right? Docker. Docker Docker. I'm sorry, I had to. So a Docker image isn't even something that's bootable, right? The way we've implemented this, and it was actually a fairly straightforward thing to add as a new target in Image Factory: we take a base image and we extract the root file system content from it using libguestfs, and we tar it up. Well, sorry, no, that's not all: we then wrap some metadata around it that Docker is expecting, so that it's something you can just docker load. And I believe, Adam, we're actually using that in the Fedora infrastructure to create Docker base images, so that's good.
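The heart of that is just a couple of libguestfs calls. Here's a sketch of the idea, not the actual plugin code:

    # A sketch of the Docker base image idea: pull the guest's root filesystem
    # out of a disk image as a tarball (not the actual Image Factory plugin).
    import guestfs

    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts("base-image.qcow2", readonly=1)
    g.launch()

    root = g.inspect_os()[0]
    mounts = g.inspect_get_mountpoints(root)
    for mountpoint in sorted(mounts):
        g.mount_ro(mounts[mountpoint], mountpoint)

    g.tar_out("/", "rootfs.tar")   # the entire guest filesystem as one tar
    g.close()
    # Wrap that tar in the JSON metadata Docker expects, and the result is
    # something you can feed straight to 'docker load'.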
And probably the most exciting target, for me anyway, is Vagrant. A lot of you said Vagrant was something you were familiar with. Vagrant supports quite a variety of targets, or destinations: it supports VirtualBox, which I think is by far the most common one; there's a plugin for libvirt/KVM that you can use; and there's support for VMware and even for Hyper-V. So it provides this wonderful way to very quickly, and with very little fuss, spin up a virtual machine and gain access to it, on a huge variety of systems. I think it's really useful even in an environment like Linux, where we already have some tools to spin up VMs. I think it's the best game in town.

So to support this and generate Vagrant images, we can take advantage of features we already have in Image Factory. Some of these targets require that we turn the image into a different format: we change the underlying disk image format from QCOW2 to VMDK if the target is VMware, for example, or we turn it into VHD if the target is Hyper-V. And then, in a way that's somewhat similar to what we do with Docker, we can layer in some XML and other metadata that's required on a case-by-case basis for these back-end Vagrant providers and wrap it all up. The result is a file that you can load into Vagrant and run.

I don't think I'm going to talk about indirection, except to say very briefly that we have a feature where you can actually build an image with another image. We originally did this so we could run the live CD tools; you have environments where you have a tool that you don't want to run in the host context, you want to run it only in F22, or only in F23. I think we also used it to create Docker base images for a little while internally. So indirection is a feature I would certainly like to explore more. Rather than having the code inside of Image Factory do the work to transform the image, you build an image that itself does the transformation into another image format. So that adds quite a bit of flexibility.

All right, yes, this was my example. Let's see if this actually worked. Sorry, this is what I actually did, right? These are actually some Image Factory commands up here. What has happened in the background while I was talking: I built a base image using a kickstart file, not a minimal kickstart file but a workstation kickstart file. When that finished, I turned it into a target image of type rhevm, which again just means I turned it into a QCOW2 image in this case. And then I did another target transformation to turn it into a vagrant-libvirt Vagrant box. And I will now show you, because I'm such a big fan of Vagrant, how we launch that on this system.

Okay, so I'm going to do a vagrant box add devconf, let me double-check my spelling. So this is going to load it into the Vagrant environment. I can take questions while this is happening. I can try. Let's do this. Okay. And I already have a Vagrantfile in this directory that references this box, so I'm going to just bring it up. I should say this is a little bit of a hack, and I'm going to have to ask Colin some questions about GNOME after the presentation, because I felt like this should have been a little easier. No offense. One of the things you might know about Vagrant is that it's very common to have a default password of vagrant. Thank you. So let's see what we get. All right, look at that. Can you please verify that this image was taken just before I started my presentation? All right, so thank you. Hey.

So what really happened here, and I appreciate that the more astute among you may notice that I could have done this in any number of ways, like, for example, just shoving that file into a box that I already had running. But as I say, what actually happened here is that we did a full installation of Fedora 23 using a custom kickstart file; we pulled the photo in at kickstart time; we transformed the result into a different disk image format; and then we wrapped it in Vagrant metadata to turn it into something that could be launched by this tool. I had originally hoped to do this on a Mac that I happen to have, only because I need it to support Image Factory, by the way; I was forced to have it. But the images are fairly large, so I didn't feel like I had enough time to make the transfer happen. But this could be any of those targets.
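Incidentally, the box file we just loaded is nothing magic. For the vagrant-libvirt provider it's roughly a gzipped tarball like this; the field names follow my reading of the vagrant-libvirt conventions, and the size is made up, so treat the details as illustrative:

    # A sketch of packaging a vagrant-libvirt box (illustrative; check the
    # vagrant-libvirt documentation for the exact metadata fields).
    import json
    import tarfile

    with open("metadata.json", "w") as f:
        json.dump({"provider": "libvirt",    # which Vagrant backend this box targets
                   "format": "qcow2",        # disk image format inside the box
                   "virtual_size": 40}, f)   # disk size in GB (made up here)

    with tarfile.open("devconf.box", "w:gz") as box:
        box.add("metadata.json")
        box.add("box.img")       # the QCOW2 disk image the factory produced
        box.add("Vagrantfile")   # per-box defaults merged with the user's Vagrantfile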
So that is the demo. That's a weight off my mind, and I can get back to the prezo.

All right, there are a lot of things I would like to improve about this. I say documentation, and I should probably say both documentation and community. I have started to put together some examples; I have a separate GitHub repo for that. I should have put the link in here, but I will link it from the README on the top-level project page. One thing the Fedora infrastructure people have asked us for, particularly when you're doing custom kickstarts or custom install files, is a little bit more detail about what it's doing. This, again, is something we tried to get away from in the original architecture for Oz, where we actually did the customization on a fully fledged system; for historical reasons, we tend to do a lot of that inside of kickstart files at Red Hat, so we'd like to improve that logging and maybe even do some live debugging: if an install goes wrong, we'd like to be able to live-debug it. You saw when I did that Vagrant box build that there's actually no way to do it as a one-shot deal; you have to run a series of commands. Again, this is more of the heritage of it being a REST interface, so I'd like to make that multi-step stuff a little bit better. It is being used as a module, as I said, although it wasn't originally designed to be, so I think there are some things we could do to make that a little bit cleaner.

I talked a little bit about indirection. It's very powerful, and we have in fact used it, but there are really two problems with it. One is that in some ways it's almost too powerful: I don't think we really want to allow people to do any arbitrary thing inside a VM that they've spun up to modify another image. So we'd like to look at templating some of these utility images, the images that are used to manipulate images to create other images, and maybe curating those as well. So maybe we have an official image that is used to create Docker base images, or a few of them based on the version of Docker that we're targeting, or, in the Fedora context, an official image that's used to run LMC to create the live CD ISO, right? And finally, there are a couple of other clouds these days, including Red Hat's partner Microsoft, which has a cloud called Azure, that we don't support as cloud providers, and I've been looking into supporting those as well. So view this as an invitation: if any of this sounds at all interesting to you, these are all things I would love to have some assistance with in the upstream community.

So that's it. Are there any questions? In the back? (Question about another image building tool.) I haven't heard of Brick Builder. Okay. There are certainly a couple of other tools. There's a tool called veewee that the Vagrant people put together. And, I had actually meant to highlight this: libvirt itself has a tool called virt-install, which is fairly robust, so that's cool. Virt-customize? Yeah, virt-customize, right; Rich's libguestfs has one as well. I think there was a question over there. Yes. (Question about whether Red Hat uses this to publish its own EC2 images.) It can do that, but Red Hat's images have billing information associated with them, where you have to pay per hour, and the API to upload an image that's been sanctioned that way is not one that is widely available, so no.
It's used to build the image that is the input to that API, but some of the other things in the EC2 plugin that people like Brookhaven use are not used internally at Red Hat. So, anything else? All right. Oh, Will, yes, go ahead. (Question about OS coverage.) No, Oz actually supports more things than Image Factory supports. Oz has had Windows support; it doesn't have Windows customization support, but it will do basic bootstrapping of a Windows install. It does Debian, Ubuntu, Mandrake, SUSE, and a few other things that are even more obscure. Yes. Yes. Okay, cool. That would improve my demo next time. Cool. I have to admit, there are some people that use Vagrant for workstations, like actual graphical workstations. It's not as common, and certainly not one of the use cases we focused on, but that's good to know. Was there one in the back? Yes. Will it handle custom disk layout, did you say? Disk layout, yes it will. If you use the default install files from Oz, you don't get to choose your disk layout, but you can provide your own kickstart, as I did in this example, and do a custom disk layout; you just do it by providing the OS-specific auto-installation files, kickstart for us, different things for other distributions. Thank you very much.

Congratulations. I was always wondering; I was expecting Dennis Gilmore to come in here and push his feature request. (Crosstalk.)

So, in a few minutes, we will start with the lightning talks, which conclude the day. We will have five lightning talks in this room, 10 minutes each, and then it will be the end of the day. Just some general remarks: remember to vote for the sessions you attended; you can do it from the website or from the mobile application. If you are here tomorrow, remember there will be a party tomorrow evening, and the tickets will be available from the afternoon.