Thanks for joining us today, everyone. We'll be talking about image management: are your images golden, gilded, or tarnished? And this is us. We're all from Symantec. I'm Brad; I primarily work on Horizon. This is Richard, who works on our image management solution. And this is Tim, who's been working on Glance.

First, we'll cover why image management is important in your cloud. Then we'll talk about building, validating, and distributing images; eliminating vulnerabilities in live VMs; and delegating image curation responsibilities. Then we'll have a demo of a new feature in Glance called community images. Then we'll talk about unified image management in hybrid clouds, and finally we'll have demos of the image pipeline that we use at Symantec and a technology we call the Dominator.

So why is image management important in your cloud? For one thing, protecting capacity: using quotas to restrict how much storage a user can consume in your cloud, which prevents any one project from creating a bunch of images and taking up all the storage. Preventing confusion: users need to know which images have been blessed by an admin and which are just random uploads from other users that might be full of vulnerabilities. Exposing provenance, or in other words, knowing where an image came from and who made it available in the first place. Ensuring freshness: is the image free of vulnerabilities and bugs? Does it have the latest security patches? Controlling publication and delegation of images: in our case, we only let cloud admins make an image public, and management of all other classes of images is delegated to the users. And managing images across hybrid clouds: you may be using the same images across different cloud providers, but managing those images should look the same across the clouds.

We all do some form of image management, but often it looks very ugly. In the beginning we start out in a good situation: we build some good images, deploy them into VMs, and we have golden images. We're good to start out with. But then, either through being too lazy or too scared to create a new image and push it out to the running VMs, vulnerabilities and new bugs are discovered in those images, the running VMs tarnish, and that gets us into a bad situation. What often happens then is just a hack, patch, and release: you release a new image, but you don't push it to the running VMs, so those running VMs are still out there with tons of vulnerabilities in them. And that's an ugly situation to be in. So next I'll turn it over to Richard to talk about how we deal with some of these problems.

Okay, thank you, Brad. So, when we want to build, validate, and distribute images: to start off with, you want to be able to build your images quickly, reliably, and repeatably. If you do one build and then the next, you will get a successful build, and assuming you didn't actually specify changes in the content, you will get essentially the same image out of the build the first, second, third, fourth time and so forth. Quick builds are important, of course, because if you're in a short iteration cycle with development and you have to wait hours for a new image to be built, then you spend a lot of your time sword-fighting with your cubemate.
And your manager might think that's a waste of your time. "It's building" — for those who've seen the XKCD comic.

So, validation. You need to ensure that the image is of high quality and that it's secure. For high quality, you need to run regression tests, so you'll have confidence that when you do come to push this image, it's going to work and it's not going to break your fleet. And for security, you need to run vulnerability scans. Even if your image is good today, tomorrow it might not be, so you have to rerun these scans periodically.

Then it comes to actually distributing the images. How do you distribute? Conventionally, people building image pipelines say: okay, I upload my image to Glance and I declare victory. I'm done for the day; it's five p.m., quitting time; everything is good. But fast forward a year and you have thousands of tarnished VMs, because they haven't been updated. They're stale, there are lots of vulnerabilities, and suddenly you're in a crisis: a new vulnerability has been discovered and it's P0, high priority, code red. And then all your operations people are going to spend many hours at night trying to somehow live-patch all these systems without breaking your production traffic.

To solve this, one important key is the Dominator. This is a technology which I'll talk more about in the coming slides, but it's a mechanism that lets you take a fully built image with your application stack, push it out to your fleet, and have confidence that the update process is fast, reliable, and secure.

A key takeaway is that when you're thinking about managing images, you shouldn't think in terms of releasing images; you should think in terms of pushing them. When you think in terms of releasing images, you think: okay, upload to Glance and I'm finished. So the quality bar that you set for yourself is not that high, because what's the price of failure? People will spin up new VMs with an image which maybe isn't quite so good; the regression tests don't have as much coverage as you would expect; people just revert to the old image — not a problem. But if you're actually going to push the image to all your running systems, that's when the fear factor comes in. And that's good, because it means you're going to set a high quality bar for yourself: you don't want to screw this up.

Okay, so this is how you eliminate the vulnerabilities in live VMs. As I mentioned earlier, you build a new image and then you need to push it out to the machines. The reality is that vulnerabilities will be discovered and you will need to patch; the question is how you do that. You want to be able to do it safely and you want to have confidence doing it. It's only when you have confidence in the quality of your images and the quality of the deployment mechanism that you will change your mindset and get into the mode where you say: okay, we have a vulnerability, we'll build a new image and we'll roll it out.

So how do you build these images? You start with some kind of golden baked image that contains your operating system and your application stack, including all configuration information that's specific to your application. Then you run it through a fully automated testing pipeline.
With a sufficient number of regression tests, you'll have confidence that your image is of quality and it's actually safe to start pushing. Then you push the image to all your target machines.

You want to use a technology that ensures the transitions are fast, robust, and complete. You want a fast transition because when you're changing a machine from one operating system version to another, even if it's a minor upgrade, there are a lot of changes to dependencies. If you look at the package management approach to updating a machine, what's actually happening is that a whole bunch of packages get updated — their dependencies and their sub-dependencies and so forth. So there's a large window where the system is in a potentially inconsistent state. You want to narrow that time window, because if you start something new on the machine in that window — even just an application that's continuously running but spawns off another binary — there's a chance things won't work, because the libraries might be incompatible with the binary you're running. You reduce your risk by narrowing the update window. You want the transition to be robust: either it didn't happen or it worked, with no halfway-through updates, because that again is the path to lots of damage across the fleet. And you also want to know that when it says it's done, it's done — not like Puppet, where you may have to run it a few more times to make sure it converges to some sort of approximate solution. It's done and the job is complete. And for efficiency, we actually send differences over the network. We don't ship entire images, because they can be multiple gigabytes; we just ship the files which have changed. The system that implements this pushing is called the Dominator.

So, a very brief architectural overview of the Dominator. The heart of it is right there: the dominator itself. What it does is continuously poll all the machines in your fleet, asking them essentially: what have you got? The machines — these are the subs at the bottom, your end nodes, the machines being dominated — individually and continuously scan their local file systems, running a SHA-512 checksum scan, and build up a representation of the file system state. The dominator polls all of them to get that state and compares it with the image each machine is supposed to have. The name of that image comes from your machine database: the dominator reads the machine database to ask, okay, what is the list of machines I should poll, and what image should each machine have? It pulls the metadata — the file system representation for the image — from the image server and compares what the sub has with the image it's supposed to have. If there are any deviations, it instructs the sub to fetch the files from the image server and then perform an update. So that's the basic architecture. On the top line, you'll see file generators. These are computed files.
If you want dynamically generated content — say, a file on the file system whose contents depend on which host it's on or what MDB attributes it has — then file generators can be used to insert that dynamic content. Okay, and now I'll hand it over to Tim.

Thank you, Richard. Hello — is this on? Yes, it is. Cool. So currently in Glance there are two explicit visibility values: there's public and there's private. But of course, you can add members to a private image, so it's also kind of shared. Currently in flight, I've got the community images patch; it's targeting Ocata, and it does two things. One, it makes shared its own explicit value. And two, it adds the new community value.

Here's how these work. Public images are exactly as they used to be: everybody can access the image and it appears in everybody's default image list. Shared is functionally identical to the way private works today: if an image is shared and has no members, it's only visible to the owner and only shows up in the owner's image list. Once you add a member, that member can access the image, and once the member's status becomes accepted, the image shows up in their default image list. And finally, you have the new value of community, which is a different way of sharing — kind of the opposite, where everybody can access a community image, but it doesn't show up in anybody's image list. You have to explicitly request to list the community images; otherwise they're invisible.

What we're recommending is that your golden images — the ones your admins tightly control and keep curated — should be the only public images. And when you have users who want to publish their own images, that's where the community value comes in. They should use community: they can still do what they want to do, but you can educate your users that the difference is you can trust the public ones, because we're on top of them; the community ones are buyer-beware. And now, for an example of how this might look in Horizon, I'll turn it over to Brad.

So, to give you a visual on how this is all going to look — and just a reminder, this community images feature in Glance has not made it into Newton, but it's planned to make it into Ocata, so I'm talking about some things that should make it into the next release of OpenStack. To give you an idea of what it looks like, we do have this implemented in our internal environment, and we can share that source code if it's something you'd like to see — just let us know directly. We have it implemented on the Glance side, and this is what our screens look like in Horizon, which also supports it.

So here we're logged into Horizon, and this is probably familiar to a lot of you. You'll see that this project has no images that it owns. It's got an image that's shared with it already, based on how image sharing works currently — nothing new there. Then they have public images, and then we have this new tab for the community list. If we go over to the public tab, you'll see this list of public images, which again are the ones blessed by the cloud admin; users looking at this know these are good images, with the latest security updates and the latest bug fixes, ready for any kind of use. Then if they go to the community list — we keep a strong separation between these two.
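For reference, here's roughly how the four visibility values and the member workflow could look from the API side once this lands. This is a minimal sketch using the openstacksdk, assuming the Ocata behavior described above; the cloud name, image names, and project ID are placeholders, and your deployment's policy decides who may set each value.

```python
import openstack

# Placeholder cloud name; reads credentials from clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Admins publish curated golden images as 'public'.
golden = conn.image.find_image("ubuntu-16.04-golden")
conn.image.update_image(golden, visibility="public")

# Ordinary users publish their own images as 'community': anyone can
# boot them, but they stay out of everybody's default image list.
mine = conn.image.find_image("my-custom-appliance")
conn.image.update_image(mine, visibility="community")

# Community images are invisible unless you explicitly ask for them.
for img in conn.image.images(visibility="community"):
    print(img.name)

# 'shared' behaves like the old 'private' plus members: once a member
# accepts (the accept call is made by the consuming project), the image
# appears in that member's default image list.
shared = conn.image.find_image("team-base-image")
member = conn.image.add_member(shared, member="<consumer-project-id>")
conn.image.update_member(member, shared, status="accepted")
```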
Anyone in the cloud can create a community image, so whatever is here is not necessarily to be trusted. By doing that, we reduce the amount of damage an individual actor can do: the expectation is that if they put up a community image, we keep it out of the public list. So for one thing, we don't spam up the public list with all these random images, but for another, we have a clear separation of what's trusted and what's not.

The next thing I'll talk about is a feature we've implemented that will probably make it into the OpenStack community later, but is not planned at this point. If a user wants to work with a community image and basically bookmark it, we have this new bookmark-image button — it can be done via the CLI as well. If they bookmark that image, you'll see a new image show up in their shared list, and any time they request their shared list, they'll see this image come up. At this point they can launch the image, boot instances with it, and otherwise work with it. And if they've done everything they want to do with it and want it out of their shared list, they can hit the remove-bookmark button — and, as expected, it's no longer in the shared list. Of course, it's still in the community list, so if they decide later they want to work with it again, they can always add it back, or just launch it directly without bookmarking it. Hopefully this gives you an idea of what community images is about and how it could help with your image management strategy in general. And next, I'll turn it back to Richard for info on hybrid clouds.

Thank you. Okay, so Symantec has built a private cloud based on OpenStack, but we're now also consuming public clouds — in particular Amazon, though we're looking at other public cloud providers too. That's because they have presence where it would take us a long time to add data centers, and it would perhaps cost more than it's worth given the revenue we'd expect in those regions. So we have to have a hybrid cloud strategy, and part of that is: how do you build images? You can build different images for different clouds, tune each one individually, and give them different content, but that doesn't give you a good hybrid experience. A real hybrid experience is where you have essentially the same image in all your environments, whether it's OpenStack, AWS, GCP, or Azure.

So this is a conceptual diagram of how to do image building and deployment — image management, I should say — in a hybrid cloud environment. You start off with templates in Git which describe the content of your image; think of them essentially as the list of packages you want, for a base image plus the specific applications you care about for your stack. Those are in Git. Then you have a trigger: when a new commit goes in, the content builder kicks off and builds essentially a file system image — an almost complete file system image with all your content. It's a file system tree; a compressed tar file, basically. That then feeds into a number of target- or environment-specific builders. There's one for Ironic, if you want to run on bare metal: you push the tree into the Ironic builder, it puts the special wrapping around it, and it uploads it to Glance, so you have an Ironic image.
You have another one for OpenStack virtual machines — again, there's a builder and the image goes into Glance. For Amazon, you have an AMI builder, and you push that into the Amazon machine image list; it might be EBS-backed or it might be instance-store-backed. And you can also push into the Dominator system if you actually want to do live updates on machines.

So you see there's a lot of commonality across these different environments. You have a number of tests that run. You have blocking tests — and the definition of a blocking test is that if the test doesn't pass, or doesn't complete, the image release is automatically blocked. These are your non-flaky tests, your must-have tests, where you say: look, these things must always be okay for me to even consider finishing the release of this image. And then you have non-blocking tests. A good example is vulnerability and compliance scans. We use a product like Qualys for our vulnerability scans, and it can generate a lot of false positives, so there's basically no way you want to hold up an image release on those. That's really something a human has to go in and look at, because it's hard to automatically determine whether it's a pass or a fail, and there's also a judgment call on whether a vulnerability is severe enough to hold up the release train. So those are non-blocking tests. But the basic pattern is the same in all these cases: you have blocking tests; if they pass, you release and notify — email, Slack channel updates, whatever plugins you decide you need for notification. And then hours or days later, depending on the nature of the tests or scans being run on the images, you get the notifications.

So this is a conceptual diagram of how to do hybrid image management more or less right, by having as seamless an experience as possible. The fact that you may be running on an Amazon VM or an OpenStack VM should be mostly hidden from you, because you have essentially the same content in both environments.

Okay, so now for a little demonstration of our image pipeline. I need to escape out of this and switch over here. We are using Spinnaker, and I have to log back in — that's security, and this is how you make it not quite as painful. Some of you may have heard of Spinnaker; it's essentially an orchestration engine that was developed at Netflix and has become quite popular. We only use a subset of its feature set — pipelines — because we have a complicated pipeline for building images. This is actually still in development: we're building out to new environments and combining the different build pipelines into a single unified pipeline, so that's a work in progress. But to give you an idea of the complexity of these pipelines, there are about 25 different stages. Here we see a configuration stage at the beginning — can you see my mouse? Okay, yes. Then we launch a seed machine so we can upload the image into Amazon. We build the image in parallel, push it to Amazon, spin up a VM, and then do testing — these would be the blocking tests. Then at this stage we have manual testing as well: a human can, if they want, make a decision — log in to the machine and see if things still look okay. As we get more confidence in the automated tests, we may turn off the manual testing so we don't need to do this anymore. Then we update the version number, a few things like that.
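To make the blocking versus non-blocking distinction concrete, here's the gating logic in miniature. This is a minimal sketch, not our actual Spinnaker stages; `gate_release`, `notify`, and the test callables are hypothetical names.

```python
def notify(message):
    # Stand-in for email / Slack / whatever notification plugins you need.
    print(message)

def gate_release(image, blocking_tests, non_blocking_tests):
    for test in blocking_tests:
        # A blocking test that fails -- or never completes -- blocks the
        # release automatically. These are the non-flaky must-have tests.
        if test(image) is not True:
            notify(f"release of {image} blocked by {test.__name__}")
            return False

    # Non-blocking tests (e.g. vulnerability/compliance scans) only
    # report; false positives shouldn't hold up the release train, so a
    # human triages these results hours or days later.
    for scan in non_blocking_tests:
        notify(f"{scan.__name__} on {image}: {scan(image)!r}")

    notify(f"{image} released")
    return True

# Example: a trivially passing regression test and a noisy scanner.
gate_release("webserver-v42",
             blocking_tests=[lambda img: True],
             non_blocking_tests=[lambda img: "3 findings (triage needed)"])
```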
We make updates in Confluence, where we register the image — details about the image build, any logs, and so forth. Then we have pushes into the different accounts we have in AWS, then any follow-up manual actions we may want to take, like registering the images with other systems, then further updates when the image is actually released, and then notifications. So this is just a quick walkthrough to show that there are actually a lot of steps involved in just building and releasing an image, let alone actually pushing it to machines. And here's an example of a recently completed image — you can see all the different stages the pipeline goes through. Actually, one of our pain points right now is copying images from one Amazon account into another, and we're looking at optimizing that. So this is in development, but it does follow the basic principle I outlined on the other slide.

And going back to the presentation — okay, so next was... I lost the spot. Okay, yes: demoing the Dominator. That's the last demo. This is the system I was talking about earlier. Again, the Dominator is continually polling all the subs, and the subs are scanning their local file systems. If there are any changes, the Dominator will correct them. So an image deployment just becomes flipping which image is the required image — the image that should be on the machine. That is all an image push is, as far as the Dominator is concerned. Whether it's correcting random changes on the machine or pushing out a new image, at the bottom level it's all the same: what is the image the machine is supposed to have now? And if what's there is not the same, make it so.

This is really difficult to demonstrate at a large scale, but I thought a good demonstration would be to show how well it works at restoring a system. You may notice here on the right-hand side: this is the command of fear. If you do this on a normal system, you've hosed yourself, right? It's kind of terrible. But let's go ahead and do it. And now I want to see how big my file system is — I can't even do that anymore. I can't even show you that the thing is almost empty, because all my commands are gone. So if I now go to the Dominator and watch the dashboard — it's the bottom one — you see it's fetching, which means it's already picked up that things are wrong, there are missing files, so it's told the sub: go fetch the files from the image server. And if I click again — okay, "sub not ready." What this means is it's actually finished fetching and done the update; it's just doing a restart and re-scanning its file system. Any second now... okay, now it's doing a double-check: is everything the way it's supposed to be? A few more seconds... okay, it's synced, which means it's finished. And now if I do this — I have a file system back again. Oh, I've got a file system again. So that took a little under a minute for the 2.2 gigabytes of content that had to be fetched and restored.

So that's just a simple demonstration of the Dominator. This is technology that's good for pushing images, whether it's pushing to VMs with your full application stack, or pushing to hypervisors: if you've got a fleet of compute nodes and you want to manage those and keep them all in sync, the Dominator is an excellent system for doing that. If you want to talk to me about that afterwards, that's fine.
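To make the "make it so" loop concrete, here's the scan-and-diff idea in miniature. This is a minimal Python sketch assuming only what's described above — the real Dominator is an open source Go codebase (linked from the handouts), and these names are illustrative.

```python
import hashlib
import os

def scan_filesystem(root):
    """What subd does continuously: map each file path to its SHA-512."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha512(f.read()).hexdigest()
            except OSError:
                pass  # unreadable or vanished files are skipped in this sketch
    return state

def compute_delta(required_image, sub_state):
    """What the dominator does per poll: diff the desired image's
    {path: digest} manifest against the sub's reported state. The sub
    then fetches only the missing or changed files, never whole images."""
    return sorted(path for path, digest in required_image.items()
                  if sub_state.get(path) != digest)

# The MDB is the source of truth: machine -> required image name.
# An image push is nothing more than flipping this value; the continuous
# poll/compare/fetch loop then converges every machine to the new image.
mdb = {"web-01": "webserver-v42", "web-02": "webserver-v42"}
```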
I also have three copies of the design documents, and 20 copies each of an architectural overview and a fact sheet, if you're just curious about it. It's all open source; it's on GitHub, and the links are on those pages. So feel free to pick up a copy if you're interested. And now, going back to the presentation — I think we are finished. Yes. So thank you. Any questions? Does anyone have any questions?

[Audience question about where the machine list lives.] Yes, so they're in the MDB. The MDB is the source of truth for your fleet. MDB — some people call it the CMDB, the Configuration Management Database — is a manifest of all the machines in your fleet and what images they should have, as well as any other metadata that may be interesting, like who owns the machine, what role it's meant for, those sorts of things.

[Audience question about computing diffs.] Yes — so the question is, how do you actually find out what the diffs are? The subs scan their local file systems. Each one builds up, essentially, a map of what the file system looks like, recording the checksums of every file. And the images are, essentially — there are the objects, which are the files themselves, and then there's a representation which shows the checksums for every file in an image. The dominator polls the subs and says: okay, tell me, what is the state of your file system? It compares that to the image, and if there are any differences, it says: you need to go fetch these files and make these updates. Does that explain it?

What's that? Oh, you mean from the image server? So when you upload an image to the image server, it's actually a compressed tar file — that's the format you upload. But internally, the image server breaks that apart into a separate object for every unique file. That's actually something I didn't mention: you could upload hundreds or thousands of images, and if they're all essentially the same — the same set of files — it's very cheap, because it deduplicates every unique file. It takes a SHA-512 checksum, and that's the index into the object store. So imagine you've built a base image, and then people build derivative images on top of it — someone adds Apache, someone adds MySQL, or whatever. The vast bulk of those images is the same once you deduplicate all the files. So in terms of storage on the image server, you're only storing the extra files: okay, you have to store Apache, and you have to store MySQL, but the base image is there already — there's nothing new. Does that make sense?

Okay, so the sub — subd. There's an agent that runs on every node, and it does a continuous loop over the file system, computing the checksums of every file. Yeah, the primary goal here is to keep machines in sync and to have a safe deployment and update mechanism. Now, because you are always continuously scanning the checksums of the files on the file system, you can leverage that and get a tripwire-type system on top of it. It's maybe not as good as doing TPM and so forth, but yes — well, yes, but they would need to subvert subd. They'd need to put in a replacement, because if you see that it suddenly stops talking to you, then you know there's a problem. And everything is signed — all the communications — so you know that it's subd that's fetching files from the image server and that's who you're talking to. Although, actually, it matters less whether you're talking to subd or not, because really your goal is to drive the machine into compliance and see if there are any deviations.
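Going back to the deduplication point for a second, here's the idea in miniature: a minimal sketch of a content-addressed object store, assuming only what's described above, not the image server's actual code.

```python
import hashlib

class ObjectStore:
    def __init__(self):
        self.objects = {}  # SHA-512 hex digest -> file contents

    def put(self, data: bytes) -> str:
        digest = hashlib.sha512(data).hexdigest()
        # Storing the same content twice is free: it's already indexed
        # by its checksum, so derivative images cost only their deltas.
        self.objects.setdefault(digest, data)
        return digest

store = ObjectStore()
first = store.put(b"contents of /bin/ls from the base image")
again = store.put(b"contents of /bin/ls from the base image")
assert first == again and len(store.objects) == 1

# Tampering is also self-evident: refetch, rehash, and compare the digest
# to the index key -- if they differ, there's a problem.
```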
On the tripwire point: it's not meant to be a full-on intrusion detection system, but it does essentially give you a fairly large practical segment of that for free.

[Audience suggestion about TPM integration.] That'd be interesting. Yeah, I haven't really thought about that. It does seem somewhat out of scope, but when you're talking about machine keys, it's certainly the case that with computed files, a file generator can actually generate certs — and this is actually an effective way for the Dominator to get unique machine certs onto every machine. You could extend that to work with a TPM. You do have the issue of how you get that initial trust, right? At some point you're saying that the initial trust matters less — you trust the basic underlying environment — and what you're really looking for, after you've established that initial trust, is whether it appears to be breaking down or subverted.

[Audience question about image signing.] No — well, the images are not signed, but the metadata does show who uploaded them, right? And it depends exactly how you use signing; it can also be damaging. One of the operational problems with signing is: what happens if the signature becomes invalid? That's a dangerous thing. There can be cases where, say, the person who uploaded and signed the image is no longer with the company, and their certs become invalid. Suddenly you can't use that image, and that might break production, or people can't launch. My view is that it's better to have certainty that authorized people upload the images, and that you can trust the server. For the actual content, I see less value in signing, especially because every file is checksummed, right? You can see that an object is essentially valid — it's self-signed, in a sense, because it's indexed by its checksum. So you can't just tamper with files: if you were to tamper with the files on the image server, then when a sub downloaded one, it would checksum it and see that the checksums don't match — there's a problem here.

Yes, so the dominator will detect that there's a difference, right? Because it asks the sub what it has, compares that with the image server, and then tells the sub to go fetch whatever files are missing — whatever files it doesn't have — from the image server, and it comes back. So the thickness of the arrows in the diagram indicates essentially the amount of data that's flowing. You see the largest amount is from the image server to the subs, because that's the bulk of your data.

[Audience question about whether machines need rebooting.] No, no — you don't need to reboot the VM unless you've changed the kernel. If you've just changed Apache or MySQL, then all you need to do is restart those services. That's part of the image. So an image is basically three things. It's a compressed tar file with your content. It's a filter file which says: look, these are the files to just not touch — like /etc/fstab, leave that alone because it's unique per machine. (Although in certain environments you could actually compute that, because you might have the same pattern on every machine — you could do that, but it's probably simpler not to.)
And then there are triggers, which are a set of essentially regular-expression matches on files: if a file changes that matches the regular expressions in a particular trigger, then it says, restart this service. That can include the kernel, but most of the time it won't be the kernel that's changing — it's just a couple of apps. So those services restart, and the beauty of this is that you can do live pushes without having to reboot machines. You get a very lightweight update. (There's a small sketch of the trigger matching at the end of this Q&A.)

This guy was actually first. [Audience question about the arrows in the architecture diagram.] Okay, so from the dominator, this thin line here is the poll request — it's an RPC call — and coming back is the response, which is more data; the request is tiny. Then, if there are changes to be made, it sends an update RPC request, which says: here's the list of files you need to fetch and then make changes to — fetch and then move into these places in the file system. Then the sub contacts the image server and says, hey, I need this set of objects, and the image server responds with the objects. Similarly, the dominator sends a small request to the MDB saying: what's the manifest of machines, including their metadata? And the MDB sends back a larger payload, which is the manifest. The dominator asks the image server: what's the list of images that exist? And it gets back the images and the file system representations of those images — the lists of all the checksums, metadata, permissions, and so forth. And then the file generators: if you have computed files, the dominator will query the file generator on demand, saying, for this file path on this machine, what's the data? The generator sends back the information, and the dominator pushes that data directly to the sub — it doesn't go through the image server, because it's dynamic.

[Audience question about whether the file generators are a separate system.] No, that's part of the Dominator. All of this is part of the Dominator system, and that's its own system, yeah.

[Audience question about backend storage.] It doesn't. The backend storage at this point is just a local file system. I have it in the plan to use an object storage system at some point if needed, but at least for our use — and I think for most people's use — hundreds of terabytes are actually cheap these days, and that's room for plenty of images. So at this point there's no great need to implement an object storage backend on a separate system like Swift or S3 or whatever. The image servers do have full replication: in our deployment, we have essentially a master image server where you upload the images, and then a whole bunch of replicas — basically one per region, simply for latency reasons. But because of that, I get replication for free; I don't need to do anything more magical. If one of the image servers falls over in a particular region, I spin up another one and it just syncs up to the master — it fetches the content, and I don't even have to lift a finger. If the master falls over, I'll just set up another one and temporarily point it at one of the slaves, and again I have all the data.

[Audience question about hard links and symlinks.] Yes, that's handled. In the images, hard links and symlinks are all represented. The only thing that isn't is sockets, because creating one is really a network call, not a file system operation — it just appears in the file system. Everything else is: pipes, symlinks, hard links.
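Here's that trigger matching in miniature. This is a minimal sketch of the behavior described above; the field layout is illustrative, not the Dominator's exact trigger format.

```python
import re

# Illustrative trigger definitions: regex patterns over changed file
# paths decide which services to restart after an update.
triggers = [
    {"patterns": [r"^/etc/ssh/", r"^/usr/sbin/sshd$"], "service": "sshd"},
    {"patterns": [r"^/etc/httpd/"], "service": "httpd"},
    {"patterns": [r"^/boot/"], "service": "reboot"},  # kernel change
]

def services_to_restart(changed_files):
    """Given the files an update touched, decide what to restart."""
    todo = set()
    for trigger in triggers:
        if any(re.search(p, f) for p in trigger["patterns"]
                               for f in changed_files):
            todo.add(trigger["service"])
    return todo

# A typical push touches a couple of apps, not the kernel, so only the
# matched services restart and the machine never reboots.
print(services_to_restart(["/etc/ssh/sshd_config"]))  # -> {'sshd'}
```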
And on hard links again: if you were to add a new hard link to the system, the Dominator sees that, and it doesn't actually need to fetch the file — it just makes the hard link on the sub, because the content is already there.

[Audience question about whether the Dominator is a CMDB.] The Dominator is not a CMDB, right? It's basically an enforcement system: you say this is what a machine should have, and it enforces that — it always keeps systems in compliance. Your MDB — that part there — is your CMDB, which I think is what you're asking about. The Dominator has a simple driver hook for simple MDBs; you can just have a JSON file if you want — that's all you need. It has a driver for the intermediate layer we have for our OpenStack deployment, so we use that, and we also have a direct driver for AWS. The Dominator itself is agnostic about the underlying cloud it's on, because it isn't only meant to manage a cloud: it can be used for applications in VMs, but it can also be used for managing the cloud itself. It's kind of a foundational technology.

If there are no more questions, thank you very much for coming. Okay. And there are papers over there if you want to grab some.