I think we can get started. Yeah. Hello, folks. My name is Neil Gompa. I was here on this stage just a little bit ago, but I have a new companion this time, David Duncan. Come on, introduce yourself. Hey, everybody, I'm David Duncan. In the Fedora team, I work on the Cloud Edition, and I've spent a lot of time working on that in my professional life. I'm a partner solutions architect, and I work on cloud images and cloud solutions for Amazon. Yeah, and I actually also wind up occasionally working with him professionally, because I work at Red Hat as a senior black belt on managed OpenShift solutions, and that means I get to interact with him when working on Red Hat OpenShift Service on AWS, or ROSA as everyone loves to call it. So we're here to talk with you, all of you, this is a participatory panel, about cloud images, about getting people to be able to make their own, what that kind of process looks like, what we hope to achieve, and what you would like to see from Fedora Cloud to help make your lives better running Fedora in the cloud. I think one of the things we're curious about is where in your workflow you might have a good space for something like a bespoke image. We spend a lot of time looking at what we can do to create images that are small, minimized, and provide a generic base, but that's just the tip of the iceberg when it comes to how the cloud experience works. You want to be able to generate, manifest, and create application space that is specific to your workloads. There are many things that we do in our daily experience that we don't want to do twice, effectively. Just to give you a great example, the CoreOS images that are created for building out the OpenShift services for all of the public cloud providers are in fact not just CoreOS. They are CoreOS plus a series of tools and technologies that are necessary for deployment, so that we can get that deployment model down, or decrease the time to a ready state.
With things like Red Hat OpenShift Service on AWS, or if you're running, as we were talking about yesterday in our talk, Fedora Cloud KDE, these are scaffolds of workloads that we build to provide something useful and differentiated on the cloud that you can be enriched by. We want to help you be successful in the cloud doing these kinds of things and doing cool stuff with it, and we want to know what you all are doing in the cloud, and whether this stuff would help you. Yeah, and just as important: what documentation do you think is significant to make the next generation of your experience with the Cloud Edition better, right? It obviously doesn't have to run on a public cloud, or on some managed cloud. It can run on your desktop, and we expect that experience to be just as important, just as critical for others, with some of the things that you can do now with cloud-init and virt-install and... Yeah, there's a lot of ways to... Like, some of the things that you might not be aware of: the Fedora Cloud Working Group actually maintains the Vagrant images, right? We take care of the Vagrant stuff. I think we recently started doing KubeVirt images, or something like that, and we're also doing things around adding Fedora to additional cloud providers. We just added Azure recently, and we're looking at Oracle Cloud and a couple of other providers, just to kind of round out the table of things. And we've always had the network of the common VPSes, your Linodes and DigitalOceans and things like that, where we've also been offering images for many years. So, anyone got something? Anyone? Yeah, looks like that guy. Hey, how are you doing? My name is Brian and I work quite a bit with upstream KubeVirt. So I actually came here today to ask you whether the Cloud SIG would have an interest in publishing images for KubeVirt VMs, basically.
Yeah, I think we have the definitions for making them, but we don't know where to put them. Yeah, so at the moment we're building our own kind of containerdisk images, which are basically just wrapping, say, Fedora Cloud or CentOS Cloud images. So they rely on cloud-init, and then we store them in Quay as container images. Sure, sure. So yeah, it's great to hear that you're interested in actually developing some of these images, because that's fantastic. Our goal here is to provide all the deliverables that are needed for people to have an end-to-end positive experience using Fedora technologies in the cloud development and production process. So whether that's on your computer, or actually in your infrastructure, or somewhere in between, we want to make that experience good and help you succeed on that front. Yeah, so one of the things that made us excited to talk about this today was that we wanted to talk about what tools are available, right? And a couple of the things that are available right now: just being able to create the build definitions inside of Koji, and then build those in Koji. And we were worried that there are a lot of people in our community who don't know that they in fact have that ability, today, right? To generate those images in a way that is consistent with their expectations. I know Adam knows about it because... Because he does it. Yeah. A lot. We appreciate you, Adam. Yeah. And, well, he was also very vocal about learning how to do it, and if you were to just wander back in the infra list, you'd probably find some great conversation around that process from him. But we wanted to make sure that everyone knows that the tools are there. There are some other tools, for uploading to cloud providers and whatnot, that are there too.
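The containerdisk wrapping Brian describes is a very thin layer. As a rough sketch (the qcow2 filename and the image it wraps are placeholders, not the actual KubeVirt project build files), a Containerfile for it can look like:

```dockerfile
# Minimal KubeVirt containerdisk: the guest disk image just has to
# live under /disk/ inside an otherwise empty container image.
# 107 is the qemu user, which KubeVirt expects to own the disk file.
FROM scratch
ADD --chown=107:107 Fedora-Cloud-Base.qcow2 /disk/
```

Built with `podman build` and pushed to a registry such as Quay, an image like this can then be referenced from a KubeVirt VirtualMachine spec as a containerDisk volume, with cloud-init user data supplied through a separate cloudInitNoCloud volume.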
And while we don't expect that we'll be doing that for everyone individually, we expect that if there's an image that we need to have in a place that makes it available for specific cloud providers, that's something that we would manage and maintain for you. And then, obviously, OSBuild is out there to help you do that in your own independent, like in your own individual accounts or whatnot. Yeah, so if you're using Image Builder, Composer, whatever, it's gone by a bunch of different names, all the things that OSBuild is the back end of, you have some way of producing these things in your own environment. We also, I mean, with the tools that we're using in Fedora Cloud today, we're in the middle of a transition, moving away from some of our legacy tools that are kind of specific, that can only really run in Fedora infrastructure, to stuff that people can run anywhere. So whether it's OSBuild, or for some images we're looking at using Kiwi. And the idea is that we want to move to a model where everything that we're making is something you can make too. Because we know that as much as we want to serve and offer this wide variety of things, the key power of the cloud is being able to build for yourself, tailoring it to your experience. And we want to make that a core tenet of what we are doing; we want everything that we're making to be something you can take and make your own. Yeah, there's a lot of times where we get feedback from the cloud providers about security, or some sort of vulnerability that they don't want to see in the machine image. As such, they want to see a new image deployed, and the governance around that is really complicated, because we really want to just put out one image and then put out updates associated with it. But their process doesn't fit that. Fast forward to the customer experience, right? Then they start to have a deeper impact on their own customers.
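To make the "producing these things in your own environment" part concrete: with osbuild-composer, a custom image starts from a small TOML blueprint. This is a hand-written sketch with a made-up name and package set, not an official Fedora blueprint:

```toml
# Example osbuild-composer blueprint for a customized cloud image.
name = "fedora-cloud-custom"
description = "Sketch of a small cloud image with a few extras"
version = "0.0.1"

[[packages]]
name = "cloud-init"
version = "*"

[[packages]]
name = "vim-enhanced"
version = "*"
```

Pushed with `composer-cli blueprints push blueprint.toml` and then built with `composer-cli compose start fedora-cloud-custom qcow2`, this produces a qcow2 you can boot locally or upload to a provider, all without touching Fedora infrastructure.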
Their own customers think, oh, okay, well now we know that our process is to update our machine images, or just images, depending on who it is. And then the updates themselves are there for us to be able to make something that is more consistent, but then to do it like once every two weeks or so. So we know that people who are working on cloud solutions are also under a high amount of pressure to issue new images, instead of just creating a patch management model for the instances that they create from those, right? Yeah, because if you're in the more ephemeral world and you have the ability to subscribe to a feed to get the latest image every time you provision, then you probably want to take advantage of that. But on the flip side, if you're one that's starting from this and you have a machine environment that's sticking around for some time, and you're going to keep it and lifecycle it, then you would favor being able to do patch management and things like that. So being able to handle both types of workflows is something that we try very hard to do. And that kind of comes back to: we're trying to make our tools easier and simpler and more flexible, to be able to handle these divergent use models, because they're both valid. And we want to make sure that people are happy using these things in the cloud, locally, wherever. Yeah, the Kiwi decision was one that came about because we were looking at how to create composable image definitions, and it was just a fantastic way to do it. We could create basically any kind of image we wanted, from WSL all the way out. And so we've been pushing in that direction. On the other hand, for the uploads and configuration, and like post-config, we started off trying to determine how we could build our own tools, but then it looked like Ansible actually does most of what we needed to do.
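To give a flavor of what those composable Kiwi definitions look like, here is a hand-written sketch, not the actual Fedora Cloud description; the name, repository URL, and package set are placeholders. A minimal Kiwi image description is an XML file along these lines:

```xml
<?xml version="1.0" encoding="utf-8"?>
<image schemaversion="7.4" name="Example-Cloud-Base">
    <description type="system">
        <author>Example Author</author>
        <contact>author@example.com</contact>
        <specification>Minimal cloud disk image sketch</specification>
    </description>
    <preferences>
        <!-- Build a qcow2 disk image suitable for cloud/VM use -->
        <type image="oem" filesystem="xfs" format="qcow2" firmware="uefi"/>
        <version>1.0.0</version>
        <packagemanager>dnf</packagemanager>
    </preferences>
    <repository type="rpm-md">
        <source path="https://example.com/fedora/releases/x86_64/os/"/>
    </repository>
    <packages type="image">
        <package name="kernel"/>
        <package name="cloud-init"/>
        <package name="openssh-server"/>
    </packages>
</image>
```

Kiwi then builds the image with something like `kiwi-ng system build --description ./my-image --target-dir ./out`, which is what makes the same definition buildable both in Fedora infrastructure and on your own machine.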
So the cloud team has kind of turned towards maintaining the software development kits in Fedora, so that we can have full support for the Ansible collections that depend on them. And ideally, these are things people can take and use in their own environments. You know, I spent eight years doing DevOps-type stuff. And one of the things that made my job challenging was that I saw all these interesting things and ways that people were doing stuff, things that I would have loved to use in my own environment, but they didn't make them accessible or available for me to easily take, adapt, and extend for my own use cases. Well, it's still not easy to integrate with the infrastructure. Yeah, I know, but I'm speaking aspirationally. Yeah, aspirationally. So this is something that David and I have been working really hard on, trying to make that a guiding point for us when we're making new decisions about how things are supposed to work and what we're producing. But it also allows us to put limits on what we are going to do. For example, we're probably not going to make a cloud variant of literally everything that exists in Fedora. We don't need to, that's... But we definitely do want to make cloud variants that are going to do things that we think are going to segue into more of a Fedora experience. So there are lots of people who come to, well, one of the things that I talked about yesterday in the cloud discussion was Cloud9, right? Cloud9 inside of Amazon doesn't have a Fedora-based image, but it could. And so for us, that's something that we are looking at how we can produce. It's obviously backlogged behind other things, like getting rid of Python 2.7.
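As a hedged illustration of that Ansible-based upload path: the module names below are from the real amazon.aws collection, but the bucket, file names, and snapshot ID are made up, and the snapshot-import step in between is deliberately elided rather than invented:

```yaml
# Sketch of an image-upload playbook; values are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Upload the raw disk image to S3
      amazon.aws.s3_object:
        bucket: my-image-staging-bucket
        object: fedora-cloud-custom.raw
        src: ./fedora-cloud-custom.raw
        mode: put

    # ... import the S3 object as an EBS snapshot here ...

    - name: Register the imported snapshot as an AMI
      amazon.aws.ec2_ami:
        name: fedora-cloud-custom
        state: present
        architecture: x86_64
        virtualization_type: hvm
        root_device_name: /dev/sda1
        device_mapping:
          - device_name: /dev/sda1
            snapshot_id: snap-0123456789abcdef0
```

The point is less the exact modules and more that the whole upload-and-register flow fits in ordinary playbooks, which is why packaging the SDKs that the collections sit on top of matters.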
But, you know, our goal is to see how those things can fit into the processes that users are already dealing with, and say, hey, I could use Fedora as a foundation for this instead of just using some other distribution. Yeah, and kind of riffing off of that, a lot of this was, well, you know, sysadmins that are in an emergency scenario, away from their main computer station, being able to spin up, say, a Cloud KDE desktop that's got their OpenShift and ROSA command line utilities, their kubeconfigs, their access to their version control systems, all of that, push-button provisioned, so that they can do emergency work in an environment that they can actually be comfortable with. I mean, I don't know if any of you have been a sysadmin in the mobile-phone on-call era, like I have been, but that sucks. Typing into a terminal and having to do things from a crappy phone console on Android or iOS, compared to being able to access a remote desktop from a web browser? Yeah, I would take the remote desktop in a web browser, because then I'd have access to all the tools that I actually need to function. And that's the kind of stuff that we're looking at to enable cloud experiences. But also we want to have a framework where, let's say, for example, the Python SIG, who maintain the Python Classroom Lab for Fedora, want to also have a cloud variant. They are perfectly capable of reusing our framework to provide that on their own as part of their lab. We don't have to do it for them. No. We would give them the tools and the capability to just do it and make it part of their own deliverables. We want to be able to provide that kind of flexibility to people in the same way, so that they can actually have their own tools for success.
Yeah, but I mean, I think a lot of other groups see that segue, and we want to help them in that way. Oh, sorry. Yeah, we want to hear, we think that there are members of other teams that are looking at things that they want to be able to stand up quickly. They want to verify, you know, they may be working on winbind, or they may be working on Keycloak. There are a lot of things out there that you want to just be able to bring up, look at, and drop, and that process of building those machine images, making sure that you have the updates, and that you have sort of a, basically a step-function-style approach to ensuring that you have what you need. We want to help make that a reality for lots of other groups. Maybe you want to work on openQA on a virtual machine, right? Perhaps you want to be able to make your own environment to do all this stuff, just by pushing a button in the cloud. Any other questions? Yeah, just keep it going. Sorry, so you mentioned that members can build their own images. Is there a good starting point for that? Is there a Cloud SIG docs page, or is there a good starting point for that? I don't think we're quite there yet. We're trying to, I mean, that's what we're doing. That's part of our goals with the rework of our image build stuff. In fact, that was a little bit of what I was hoping to get a springboard for today: to see where the interest is and how we want to frame that documentation. Yeah, so part of the reason we did the Cloud KDE talk yesterday, and why we're doing this panel today, is we wanted to gauge some interest and learn from y'all what kind of stuff you would find compelling and interesting to do with Fedora Cloud.
And because we have some ideas of what we want to do, and I think we're on a good path, but we also want to hear from y'all: what are you interested in? What are you looking for? What kind of questions do you have about doing stuff with Fedora in the cloud, and that sort of thing? Yeah, I mean, I'll be honest, the KubeVirt thing is super exciting to me, because there are two aspects to it. One, KubeVirt generally is a great modernization strategy. And two, it segues into some of the work around lightweight virtual machine environments that we have not gotten an opportunity to work on just yet, but porting to Firecracker and things like that is a big deal. Also, congratulations, KubeVirt, for reaching 1.0. You finally got it, you finally did it. Yeah, we finally got there, yeah. You also mentioned ROSA on the slide, I was just wondering, is container-native virtualization included in ROSA? I don't think so. Can we, I don't, yeah, I don't think it's a thing. Yeah, so there's a couple. It's a question for him whether we'll ever do it, though. So whether or not it'll be supported, that's kind of questionable, but there are flags on the metal instances, and we're still working on the metal. Yeah, most of the reason why we can't do it is because the instance types we support literally don't let us. And also, to some level, it feels weird, right? I mean, I think there are cases where it would make sense, but you could also just use EC2. But I can also get the idea of wanting the abstraction and the same API everywhere, to be able to do all the things. Well, you want to do your testing and get that nested virt. And then there's that, and, I mean, this is going to step back into peeling back the curtain a little on Fedora Cloud stuff.
Part of the issue that makes Fedora Cloud images today kind of painful is that the tools currently require us to boot up a virtual machine to produce the cloud images, and that virtual machine boot-up process is kind of special, and that makes it difficult for people to replicate our processes on their own. And I want to move away from processes that are so complicated that someone who's just getting started fresh, and is excited, gets so discouraged that they flame right out of the whole thing. I mean, you can flatten a kickstart now and just figure out what's in the process, but that's just one way to make a virtual machine. I mean, you could literally make it in the same way that you make, or made, an OpenVZ image. I hope you're not making OpenVZ images today. Hey. Yeah. In the same way that we did a gazillion years ago, and that's not unusual. I know the Amazon Linux team, when they build an image, it's basically just a tar file, right, when you're finished. And I don't think that, you know, this is also one of those places where there's more than one way to skin this cat. And I think that's an important part of what's interesting for us: we don't necessarily have to do this in a way that is consistent with the expectations of one tool. If we decide or determine that there's something that we want to do here that's different, we can be versatile. And... As long as it's maintainable. Because what we don't want to wind up with is a situation where we have a litany of things where we don't actually know how we can keep them going. Oh, agreed. But you know, if we're helping somebody, sure, in the process, I don't want it to be constrained to, oh, well, they're using Kiwi, so... No, that's not what I mean, right?
Like, if a tool is kind of central to building the thing that they want or need, then as long as it's straightforward enough to integrate and support, I don't have a problem with it. Yeah. Yeah, so there's a lot of different directions you can go with how you create your images, and how your process works, whether or not that process is consistent with the one that we have. And I don't think that should be something that concerns you. The most helpful thing, to me, is knowing that you can pull together this configuration, push it into the tools that we already have in place, get your build alerts in the same way that you would get them with any other Fedora project, and, you know, throw the results over to Adam for testing. Or Adam might just do it anyway and then tell you he did it. That's right. Any other questions? Any other? Comments? Comments, yeah. There we go. This is looking farther into the future, but one of the things that I'd like to do eventually is working with GPU instances in Fedora. And that's predicated on getting all of the stuff packaged in Fedora so that it works. But, assuming we can get everything to work, is that something you think would be reasonable? Yes, actually. We briefly mentioned this yesterday during our Cloud KDE talk: GPU-accelerated instances, and being able to support them properly in Fedora Cloud, is absolutely on our roadmap of requirements. Most of the public cloud providers also have the ability to distribute the NVIDIA drivers, so we can cheat. Piggyback off of that, yeah. I guess, I don't know, one of the problems I'm seeing is that where we're starting is with AMD, not with NVIDIA. That's fine too. And I know of one cloud provider who has one instance type that has an AMD GPU in it.
And it's not current. It's a couple of generations old. Better than nothing, but, like I was just saying, all the cloud providers do have NVIDIA. Amazon has one instance type that has an AMD GPU in it, and I have not been able to find a single other cloud provider that has an AMD GPU. Well then, props to AWS for having AMD Radeon GPUs. It's pretty good. Yeah, newer ones would be awesome. Yeah, more new ones would be awesome. Because it's not technically supported by ROCm. There are questions. I mean, it's not on AMD's official list. Whether it works or not is a different question. It's just not on their current official "these are the GPUs we support" list, and the one that's in AWS is different. Well, I suspect, you know, I'm not going to speak for David, but at least from the Fedora Cloud perspective, I think if we start having those toolkits in place, and having images with that stuff preloaded, so that instances, when they're provisioned, can just take advantage of them, I think we're going to see a different dynamic, and people will start building for it. But what do you want to talk about on the Amazon side? Well, yeah, I mean, from the Amazon side, we have our own crafted drivers for those. The way that works from the Amazon perspective is that the drivers are produced by AMD for the AWS environment, and then they're maintained in an S3 location for the instances. But they're available and they're easy for us to distribute, and there is a RHEL workstation image that was created specifically for that reason, to have two things incorporated into the instance. One was the GPU drivers, for both standard GPU workloads and general-purpose GPU workloads. And then, on top of that, the NICE DCV VDI solution in place, so the server's already there, you just connect to it with the client, and it's basically like running it where you are.
And the reason we wanted to do the KDE workstation is because we don't have an upstream kind of experience for people to iterate on, for some of the same reasons. So we're really super interested in making sure that customers, or users, have that available to them, because the user experience around machine learning on Fedora is a story that we very much need to tell. Yeah, that was part of one of my talks yesterday: what it is, and hopefully how we're going to fix it. From my perspective, I am super excited about seeing the ability to use GPU compute in Fedora out of the gate. I'm seeing the ROCm stack get integrated, and being able to start doing that kind of thing. I hope that it will encourage more GPU compute workloads, whether it's AI/ML or something else. I mean, render farms are another example of GPU compute workloads that will use AMD GPUs, with open drivers and an open software stack, so that everyone gets to exercise the flexibility and freedom that they need to do what they want, and not what somebody else prescribes for them. And just like you're doing, our goal is to build this narrative so that people don't forget, or don't think that just because there is another narrative, right, that this one is absent; we want to make sure that this one is very clear and in their space. So for me, the machine learning, I mean, if you'll let me soapbox for just a minute: the machine learning conversation is one that I think is really hard, because, first off, most of the applications that make up this space, or work in this space, are impossible to package, right? I mean, we've had that conversation several times. And then the way that others have approached this has been to take all of that stuff that they can't really figure out how to package, just throw it into /opt, create an image, and then make that available. And that's... How kind of you?
It's scary. To me, that's not a helpful model, right? Like, there's no way to do updates. There's a consistency there that maybe I might like as a scientist, but this goes back to the whole idea of developing images for yourself. We want to have those images in, you know, my project, in my account, right? I want to have those images. But that doesn't necessarily mean that I want to find them out there in the world in an incredibly vulnerable state for super long, or that I want to find out that they disappear at some amazingly fast rate that makes it impossible for me to have a steady workflow. So, I mean, I would love for us to try and figure out how we can do that. You know, Akash Deep and his work on NeuroFedora, I thought, was a great entryway into this space. And, you know, I mean, Open Data Hub was a really important part of that process around the OpenShift experience, and I have built out machine sets that were GPU-enabled for doing machine learning workloads in the OpenShift space. But really, a lot of that work can be done on a single machine, in a very, sort of, limited space. And so you can bring up one instance, do whatever it is that you're doing against whatever data lake you're accessing, and then turn that off. And helping people to understand how to do that means giving them the tools that they need to do the work. Just, I want to follow up quickly on something that you said a while back, to make sure I understood. So it is not a requirement from Amazon that the driver be supplied by AMD, it's just something that was enabled. So if we had something in Fedora that was completely upstream and didn't have AMD's binaries, that is an option for... Yeah, that's totally an option. Just wanted to make sure I understood correctly.
Yeah, you can use the open driver stack in there if it's preloaded. Amazon just happens to have a bundle that you can source and install if you don't have anything. Okay. Yeah, without getting into trouble. No, that's an important part. Anybody else? Then I'll just ask Isaac's question. No, we don't have the RISC-V images. Just yet. Oh, you know, I expected someone was going to ask it. Like, I know, it's so prepared. He's leading the witness here, so I'm going to ask two questions. You just asked one of the questions, but the real question is: when will Amazon support RISC-V in their public cloud? I'll defer to people who are, you know, more capable of answering that question. Because that's the chicken and egg. So when Amazon declares that they're going to do that, then we can say, okay, so when are we going to have Fedora images to sit on the RISC-V boxes? I mean, why would we have to wait for Amazon for that? Because Fedora Cloud does way more than that. Fedora Cloud is also, you know, you can run these images on personal hypervisors. You just boot them up with virt-install and pass in the cloud-init user data, or you use Cockpit, which will boot Fedora Cloud images as VMs, or you run OpenStack on RISC-V, or you're running KubeVirt. There are lots of different ways this could happen, and that's part of the Fedora Cloud thing. There is, but he's Mr. Amazon, you're Mr. Fedora, right? So that was a blunt question for Amazon. And from a Fedora perspective, you still have the hardware enablement conundrum, right? Yeah, well. So, yes, we could do all the things you said, but we have to have the hardware enabled to begin with. It's customer driven. And for us, somebody's got to give us the stuff to be able to start doing it, because...
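The virt-install-plus-cloud-init path mentioned here mostly comes down to writing a small user-data file and pointing virt-install at it. As a sketch, with the user name and SSH key below as obvious placeholders:

```yaml
#cloud-config
# Example cloud-init user data for a local Fedora Cloud VM.
users:
  - name: admin
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example
```

Then something like `virt-install --name fedora-cloud --memory 2048 --import --disk Fedora-Cloud-Base.qcow2 --cloud-init user-data=user-data.yaml --osinfo fedora-latest` boots the stock image with that configuration applied on first boot (the `--cloud-init` option needs a reasonably recent virt-install).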
We're going to help triangulate the giving of the stuff to help you do it, and we're also going to politely encourage all the cloud vendors, not just Amazon, to support it. I mean, it's on the public record: Alibaba actually has T-Head, where they're developing RISC-V servers already, and they're hook, line, and sinker bought into this equation. I would love to see Amazon do the same. I have nothing to say to that. I mean, you can't rule it out, but you can say the roadmap is definitely driven by what we're being asked to do. And all I'll say is that RISC-V is an interesting architecture, and I hope that the machine bring-ups go better than previous machine bring-ups have. Probably a small question, but do customers not want RISC-V on Amazon? I mean, you're asking the wrong guy, but yeah. I mean, there are some other people who are more deeply aligned with that; Peter DeSantis would be the one to ask that question, not David Duncan. And if we're speaking of people in the third person, Neil Gompa is just like, well, I would like to have as many architectures supported in Fedora, and offered on platforms, and usable in all the different ways, but I suspect that really, it's just that most people don't know that they exist, right? Like, you don't ask for something you don't know exists, and once you know it exists and you find a reason to want it, then the dynamics start changing. And that's true even in the Fedora open source case. We started having this interest and development in all these architectures because people come in and they're like, we really want this, and they start talking about it, and other people get excited about it. It's that flywheel of success. So I'd just like to add to that that, yeah, it's a bit premature and early yet for enterprise servers that are RISC-V-based, but they are coming.
And if anyone's familiar with the guru, or demigod from Mount Olympus, David Patterson: Patterson maintains that RISC-V is actually going to supplant all CPU architectures over the next 10 years. And he's a former x86 guru, et cetera, and he's ridden the wave with Arm, and RISC-V is something that just gives so much choice and flexibility to the HPs of the world, the startup chip vendors of the world, customers, countries like India, China, et cetera, that it's an inevitable equation, but we're not quite there yet. So at some point, yes, the Goldman Sachses and the Bloombergs are going to start consuming this, and Amazon is going to spring a surprise on us, and hopefully we'll have laid the groundwork to enable all the hardware donations, and then we can be off to the races. I mean, that's very lofty. My comment on that is that I remember when James Hamilton said that he thought that Arm was the future, and then it was like three years before, yeah. Yeah, but three years for him to make it work, and it was very interesting to see from the perspective of an engineer, because the questions that happened as a result of that were more economic than they were anything else. So it's very interesting, and then of course, a series of really amazing engineers started to solve really big problems really fast, and it's just like that. Seeing the way that hardware can evolve in such a very short amount of time always surprises me, and I feel fortunate to have worked with a lot of these people. I will kind of add to this a little bit.
You know, your comment about how it'll be everywhere and surpass all the other architectures and whatever: the concern I have right now is that when you look at how RISC-V is evolving, how development is going, how manufacturer interest is growing, and how CPUs and all this other stuff are coming along, I find some degree of credibility in the idea that there'll be more RISC-V chips than anything else by the end of the decade. I don't know if I could put a pin in it and say that means RISC-V will be the dominant computing architecture, because of the fragmentation between the RISC-V variants and instruction sets and stuff. We don't know how that's gonna settle out yet, because it's still changing even to this day. But before we get onto that question, let me just say that one of my favorite places to watch what's going on that's gonna be relevant to the way that cloud images are being built is the work that's being done in the virt team that Karen runs, and a lot of the things that come back from that, like what's being added to the QEMU/KVM code bases, make a huge difference in terms of what we can support. I mean, my goal would be to ensure that that KVM/QEMU support is there, and it's there in a way that we can all take advantage of. And importantly, as I was saying with the fragmentation of the instruction sets: whatever gets implemented in QEMU, that the distributions and all of us rely on, the hardware vendors have to do. It can't really be the other way around. The hardware vendors can't start stratifying the ABI, because RISC-V ties the ABI to instruction combinations. So if your instruction combination isn't correct, you cannot run the application. And so we can't have a situation where something calls itself 64-bit RISC-V and you can't run on it because, when your application tries to come up, all the instructions are missing. There has to be a guaranteed baseline that everybody's going to support. 
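To make that baseline concern concrete, here's a minimal sketch (not anything Fedora or the working group actually ships) of checking whether a RISC-V ISA string, like the one reported in the `isa` field of `/proc/cpuinfo` on RISC-V hardware, covers a required set of single-letter extensions. The RV64GC-style baseline used here is an illustrative assumption, not an official requirement, and real detection would also have to handle the multi-letter Z-extensions this sketch ignores.

```python
# Hypothetical sketch: verify a RISC-V ISA string against a required baseline
# of single-letter extensions. The "imafdc" baseline is an assumption for
# illustration, not an official Fedora or RISC-V profile requirement.

def missing_extensions(isa: str, required: str = "imafdc") -> list[str]:
    """Return the required single-letter extensions absent from `isa`."""
    isa = isa.lower()
    if not isa.startswith(("rv64", "rv32")):
        raise ValueError(f"not a RISC-V ISA string: {isa!r}")
    # Extensions follow the "rv64"/"rv32" prefix; multi-letter extensions
    # (e.g. "zicsr") are underscore-separated and ignored in this sketch.
    base = isa[4:].split("_")[0]
    # "g" is shorthand for the IMAFD combination (plus Zicsr/Zifencei).
    if "g" in base:
        base += "imafd"
    return [ext for ext in required if ext not in base]

print(missing_extensions("rv64imafdc"))  # -> []
print(missing_extensions("rv64gc"))      # -> []
print(missing_extensions("rv64imac"))    # -> ['f', 'd']
```

The point of the exercise: two chips can both legitimately call themselves "64-bit RISC-V" while one of them fails this check, which is exactly the situation where a binary built for the baseline won't run.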
And even today, with Fedora RISC-V being bootstrapped, there are two other separate efforts going on right now that are doing the same bootstrap for different instruction combinations. And that's the part that scares me, because if we wind up in a situation where we have to deal with different instruction subsets that are not compatible, RISC-V is going to fail, because people won't be able to figure out how to work with it. I want to make sure that we all know we're on a tangent. Yes, I am aware. We are, we definitely are. But let me go back and say the ten-year comment was a Patterson quote. Yeah. So put that in context. Yeah. And also, new architectures require a lot of work under the hood from all the distro vendors, upstream, kernel enablement. And that in itself is what actually drives standardization and helps prevent some of the potential fragmentation. So that's the beauty of open source. When you're working in the open and it's an open hardware definition, that means the definitions will evolve with a little bit of common sense from the community sprinkled all over it. So I'm not really too afraid of it going off on 50 different tangents, which it could. It could for all the edge devices, but RHEL is never going to run on those devices. Sure. And Amazon's never going to have to worry about supporting them. So for the enterprise server side of the equation, I feel really confident that things will consolidate and come into focus because of the community effort. I mean, it's everyone here and everyone beyond here that helps create RISC-V and helps create Linux. And that's the beauty of open source. So to tie it back to Fedora Cloud, as we're talking about, and bring this tangent back around: I don't care just about the enterprise servers or just the cloud; I have to care about the whole cloud experience, down to the person's computer. 
And so, whether it's RISC-V or whether it's ARM, or whether it's running a desktop or doing Kubernetes or whatever. Or s390x. Ah, mainframes are special. I'm gonna, that one's getting put aside. But when we're talking about a Fedora Cloud experience for that, I don't talk about just Amazon. I also talk about the desktop that someone's working on to develop the cloud workload. And I talk about the experience of going between the two and all of those other things. So there has to be an underlying level of consistency at all those levels. You can't have an experience that only works on the server side. It has to work locally too. And that's the piece that, with ARM, took a really, really, really long time. It took twice as long as it did to get the server parts. The server parts were easy. Getting everything else was harder. And I am very happy and willing and optimistic about supporting RISC-V for Fedora Cloud. But we have to have all those pieces in place to make it work. And again, it'll be because we have all those component parts in the KVM/QEMU code base, and we'll be looking for that. That's what we work on. And that means dealing with a lot of special problems, like, on cloud instances, one of the things that we worry about is, how do you get a dump? Right, I mean, how do you get a memory dump? What happens when the program crashes? Yeah, where do you get your, yeah, where do you get your debug? Those are... And that's actually a surprisingly difficult problem. It is. But we wanna be there to help you, and to give anyone who wants to build specifically cloud images, or cloud-like images, the opportunity to have our help and to be a part of our experience and really part of our team. We're open. Come join us if you're interested in this stuff. Actually, we need you to come and join us. Anything else? I think we can call it. Yeah, I guess so. Thanks for being here.