Okay, I'm David Duncan, and I work with a lot of other people on the cloud distribution, or edition. The focus of the Cloud edition had been one of those things that was slipping behind: we had Atomic (I have the jacket over here) for a long time, and then we thought OSTree would just replace everything. Our five-year plan includes the immutable OS as a big part of the cloud initiative; obviously that's the foundation for OpenShift, and there's no way around that. That said, I work for Amazon, if you didn't know, and my role is to talk to customers who are using partner Linux and Linux-based solutions on top of cloud architecture. So I hear what's going on, what their pain points are, and what they want at their fingertips beyond their modernization models. A lot of them have solutions like SAP that run for upwards of 30 years, and their expectation is a consistent experience for that whole duration. We don't really talk about modernization in terms of decades, not at this point; we talk about servers as pets in terms of decades, and we're only just starting to talk about cloud solutions that operate predictably. So I pride myself, and the work my team does, on building solid support for Linux and Linux-based configurations on top of the cloud, or, as Peter would say, somebody else's computer. The fun thing is that learning those paradigms made me a ready fit for the Fedora cloud experience and the cloud team, just as Dusty Mabe was turning his focus toward Fedora CoreOS. After the acquisition of CoreOS, obviously, all bets were off for Atomic.
Atomic was retired, and the cloud experience started to accelerate around CoreOS. I have a great relationship with the CoreOS team; I love working with them, and I also enjoy running CoreOS for specific solutions. One of my favorite uses is running agent-based Step Functions workloads in cloud environments: you can create an environment that is basically throwaway, one that arrives for a few minutes, produces an artifact, and is then destroyed as the result of some state function. I very much enjoy that. But today I'm here to tell you that we put a lot of time and effort into reigniting Cloud as an edition and as a community. That was a really unifying experience, and it brought us to a place where we recognized there were some different goals around cloud than there had ever been before. One of the things we talked about a lot was the concept of what a cloud image really is. If it isn't exactly the same thing as the server image, or maybe the workstation image, then what is it? Some of those questions are answered in the way we leverage the cloud configuration, or the images themselves, because they become extremely versatile. We don't just create a raw disk file; we create a minimized solution where we know we're using utilities consistent with the expectations users have across the breadth of their experience, so the image doesn't boot differently on one platform than on another. We try to keep those expectations consistent. We're also trying to improve the documentation, and that's another thing I'm excited about, because I see peers in the room, and I have all these initiatives that probably land in areas that he's
governing at this point around cloud. One of the things we desperately need is help with our documentation, and with integrating with the docs team to do better documentation. I had someone who committed to it, but he's a volunteer, so we work on his time, as he's available, and give him as much help as we possibly can. I'm grateful. That also puts us in touch with the websites and infrastructure groups on a fairly frequent basis, and it gives us a lot of initiatives around the cloud providers. For me, the goal with Fedora is to extend that functionality across any provider that is willing to work with us. Right now we're putting a lot of time and effort into Azure and making sure Azure works; that has been an up-and-down experience, and in our last release we had to make a lot of security modifications. If you noticed, several of our change proposals have been around things like disabling support for non-tokenized communication with the metadata services, ensuring we support the faster network interfaces, and getting the right kind of support into the configuration of the image creation. That said, our image creation is done using a terrible tool. Okay, a wonderful tool, a tool that was wonderful back when it was heavily maintained, but we have other options now. As we look to the future, we'll start to use more Ansible rather than additional code development for our DevOps. We decided to try building our own collection, so we're working through an Ansible collection that will support our requirements for upload into Oracle, into Azure, into EC2, and into GCP, and that's really where we want to be. So when you
think about where cloud belongs, it extends out into the experience of the Vagrant images, and we expect to extend the cloud image into WSL. The work we've done leveraging Kiwi in the context of Koji has given us the option of creating our own WSL image, consistent with the images we're creating for the cloud providers. That gives us the option of filling that space. There are other people doing a great job of re-rolling the cloud packages and placing them into the WSL directory, but we'd like to have our own. So we have a packaging strategy that we're just now forming, similar to the way the Neuro Fedora team has settled into their packaging model. We want the packages that are central to the cloud providers all located inside the cloud team, and the reason is that we don't want any one person to be responsible for the packaging model, or for making things work all by themselves. We'd like this to be a collective, where the packages necessary for, say, the Google compute CLI, the AWS CLI, other tools like the cloud development kits that come from these providers, and the cloud shell integrations in the desktops all work and function consistently, and where we have a unified voice in the way we communicate back to those providers. Anybody who has worked with cloud has probably been bitten by a cloud provider's utilities being based on an earlier version of Python, like Python 2.7 in the modern world, and that gives us a lot of pain points
when we try to integrate, and we want a unified voice on that. Another thing we run into is cloud providers, or their developers, constraining requirements: a Python-based CLI may pin some version of Pygments or another package that's consistently lower than what we're producing for Fedora, and we have to go in and relax those requirements. This is a common thing in a lot of software, but it's something we really don't want to do; we want consistency across the board. I try to do this in my professional life too, but here in Fedora this is the unified voice I want us to have as a group. So that gives you our strategy and our position.

Moving on, we also see cloud as integral to the workstation experience. A lot of our testing goes on in a server environment, or in openQA, in ways that are fairly basic, and we want more advanced configurations. We want to look more at the boot requirements. If something fails in one of our cloud environments, we really want to take it back to a server or workstation that we own and take it through its paces, to see whether we can locate the problem there just as easily as by trying to debug with no serial console. I see us as roughly in line with the Fedora Server model, but Server obviously has a lot of things we don't need. We don't require DNS; that's a service expected to be provided, and while we might offer some sort of intermediate faster resolver, we won't provide full-fledged DNS support. We don't need a DHCP server, because there's no way to get an IP address on any of the cloud providers without the provider's own DHCP assignment. Let's not talk about IPv6, though. So we try to keep the image minimal. That said, there are lots of things we can do. One of the ideas I put up here, one I feel we can flesh out more, is that almost every cloud provider has some IDE model that is web-based: a web-based interface like Cloud9, where the Cloud9 API sits on top of some instance somewhere. Today, the team responsible for it publishes instructions on how to make your own, but they don't actually make one associated with Fedora. I wholeheartedly believe that's something we want to provide for the development engineers working in that space, so they have a segue into the Fedora community and our projects without having to change what they were doing previously; they just introduce Fedora into their workflow. Kiwi with Koji is one of our big paths forward, and here's the reason. We like osbuild and the way it works; we love the way it functions as a service, and it has a very clear position inside the community and our efforts. But right now there are things about it that limit what we can produce, including the WSL images, container-based images, and images associated with Azure in the way that's expected. So we want to ease our way into what's being done inside the Fedora community, but we also want to introduce Kiwi, because we think
it's a very effective tool. It has a great way of layering configs, so we can have a composable configuration that we can break down without being responsible for everything; that's one of the reasons we've chosen it as part of our process, while all of our current tools are in a maintenance phase. We're producing architectures for ARM and for x86_64, and we have special friends inside the Red Hat teams who produce s390x machine images for us and report back any inconsistency in how those function. Those are the architectures we're supporting. I keep hounding the internal folks at EC2 to give us access to the Mac instances so we can try out this Asahi stuff. We want to see as many customized images as we possibly can. If there are things people need in the context of a specific cloud provider, we want to be capable of producing them, and to have, within these QE configurations, a composable component that lets us make whatever minor or major tweak needs to be done for that specific environment. Say you have an arm64 instance that requires the IOMMU be disabled, because going through that layer can add considerable latency to communication with the processor; we want to make sure those customized configurations are included. The agents associated with Google's compute platform versus the agents associated with Azure, we want those specifically addressed, but we also want a generic experience that
users can take advantage of in their own processes, because, again, we don't want them to lose track of the things they've already developed and have to move to something different just because they've decided to use Fedora. That's one of the reasons we think Ignition is a great idea, but we don't want to forfeit cloud-init either. The package dashboard integration is something we're working on right now. A few of the packages we're including are the hibernation agents for EC2 instances, and the Windows agent actually includes the same kind of hibernation. That one tends to be something you wouldn't want anywhere else, because those agents modify the sleep scripts, and modifying the sleep scripts is a no-no anywhere else; Peter's not going to do that on Server. So we have to make a conscious decision that this is something beneficial outside of our main distribution objectives. That said, making that decision doesn't mean we immediately let the cloud provider off the hook. It means we work with them upstream, the way Fedora is supposed to: we go back to those service teams and say, you're modifying something that should be pushed upstream, so that people can consider it consistent with the way it's supposed to be handled. We want to make sure we have that parity. So here are a few things I'm asking for. We need test plan updates; we have lots of things that are not being tested as well as we wanted. We need support for our Fedora Cloud test days.
If you're not familiar with the cloud test days, they're a wonderful thing for us to get into and a great place for more automation. Some of you here from the Ansible group may know more about automation and automation platforms, and we'd love your help making sure those tests are ready for use, so we can get a lot of feedback very quickly. Btrfs, if you don't know, is an important part of our process, and it's something that separates us: you'll see the work on Enterprise Linux next, and how that deviates from how the Fedora Cloud image looks. We have this base of Btrfs, similar to Workstation, and one of the things we wanted to do here was create an environment where we could really separate out that experimentation and feed back findings that might be beneficial far in the future, or even sooner. It gives us an opportunity for exploration, and one thing we want to do, directly related to some objectives in the Red Hat world, is to provide some microkernel models people can experiment with. We need lots of documentation; I'm short on time, so I'm just going to say that right out loud. These are some things I think would be great to have as companion guides, and as friendlier documentation for each of the cloud providers. And these are some things we're doing: the Cloud9 integration, and looking at Neuro Fedora as a connected image with the NVIDIA drivers already established. A lot of the cloud providers already have distribution rights, and we can produce images with the associated NVIDIA drivers integrated back in.
We're also working on the workstation model, so VDI is another thing we think is really important. We'd like to do more integration with HPC technologies like ParallelCluster. We just want to make things more flexible and more agile, and bring more of this opportunity. We have another talk on Thursday about building your own images and how we build ours. And we really want to take this all the way out: the IoT experience can definitely be brought back into your cloud experience, whether you're running on top of your own server or on top of a cloud provider. With that, I'll take any questions. I appreciate your attention; there's a lot of attention in the room, and I'm grateful. Any questions? Well, if you find you have questions, you can find us here; I'm here all day, and I always love to talk about it. The Cloud SIG meets every other Thursday. We made it an early morning, so it's not a very APAC-friendly time frame, but I'm happy to move it to increase participation if we need to. I've probably got 20 seconds left.

[Audience] I was just going to ask: how did Amazon take your suggestions on fixing the EC2 hibernation agent, to make it more upstream-friendly?

Honestly, they took it really well. There's a specific engineer who made his first kernel commits because I was collaborating with Dave Chinner on how we could make it work. We had run into a bug, an NVMe bug associated with XFS, and Dave and I were talking back and forth. I said, this guy Shao Yi is working on the bug associated with this; can you help him get this kernel commit done? And the two of us together coached him on making the commit and doing the work on the LKML. Then Dave took his patch; obviously this was his first patch.
So he took it with some spacing issues and mixed tabs and spaces, which probably wouldn't have happened if they hadn't already had that behind-the-scenes conversation about what the goals were and how to attack them. But it made him a first-time kernel committer, and to me that was the experience of a lifetime: this is the EC2 hibernation agent, and the reason Shao Yi is working with confidence on the kernel is that we had this conversation in the context of Fedora, and we had it with the owners of that code, who also just so happen to work for Red Hat. So, cool. Well, I know I'm out of time, so I'll say I appreciate everyone coming and listening. If you want to participate, please let me know, and if you want to talk further about what we can do to streamline how I interact with the teams you work on, I'm really looking forward to that.

For years we've done our uploads with ImageFactory, but ImageFactory really only supports one cloud, and if you look at it, the code is built around libcloud. If your cloud experience doesn't go back to Joyent, the people who brought us npm and Node.js, Joyent was a company that was first on the scene; Eucalyptus was there, and then libcloud was built. I think it was built in part by Mitch Garnaat, who was probably part of the libcloud team. Mitch went on to do the boto3 libraries for Amazon, and he did those independently; then Amazon said, we'll take over the maintenance and bring a whole team to it, and that's how the boto3 library became boto3, botocore, and the AWS CLI.
The reason we wanted to do an Ansible collection is that libcloud was one of those things people used because they wanted a generic experience. They wanted to build a generic experience around it, and libcloud was built around Eucalyptus. Red Hat had an initiative back when Joyent was nascent and the concepts around Eucalyptus were still just called distributed computing: a program called Deltacloud, which still exists, and libcloud is a component of that Deltacloud initiative. All of that is based on Python 2.7. And of course it went into the Apache project and did what things do when they fall into the Apache project. There are still people maintaining it, which is great; they're just not making advances on it, and they don't fix a whole lot of the bugs, just the ones critical to whatever they're doing. Our use of it never got beyond the AWS configuration, so we don't have any advanced support; there's clearly no support for Oracle Cloud Infrastructure, and there never will be. So we know we have a mission there, which is to create that consistency. The approach that best integrates with the way we work is to leverage Ansible the same way Infrastructure leverages Ansible. That way we can push whatever we need into an Ansible playbook, and the playbook can be responsible for the image creation. There are things inside image registration that wouldn't necessarily lend themselves to that, like the Red Hat images, because Red Hat images have assigned billing codes and similar things hidden behind the scenes, and we probably don't want those; they're just not useful.
They're only useful to about three actual users of that image registration, so they would never be a prioritized function. But it's something we can use to build images very specifically, images that leverage the same kinds of components we have both in the kickstarts and in the ImageFactory tooling we use today. They integrate really well with Koji, which makes it super easy for us to do an event-driven architecture: once we have a consistent image and it goes through tests, once the test promotion happens, we can automate the deployment based on that collection. The collection itself does some things that are not standard, so all the Ansible folks would say, that's a nice idea, but you're still taking a lot of dependencies on other collections to make it happen. I think we have a fairly good foundation, though, for why we're supporting additional collections and leveraging collections that have support today: GCP, the Google compute collection, is a supported configuration for Ansible automation, the same is true for AWS, and we have some things coming up in Azure. That gets us our primaries, and then we can use some basic commands for the smaller providers, what they call managed clouds rather than public clouds; we want to make sure we still have images on the managed clouds too. This makes a more collaborative experience possible. If you're asking what I think is really important: for me, it's making sure we have this kind of distributed architecture that's easy to drive with event-driven experiences around the QA team's results.
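To make that upload path concrete, here is a rough sketch of the kind of steps such a collection would automate for EC2, written with the plain AWS CLI. The subcommands are real, but the bucket, file, and image names are purely illustrative, not the SIG's actual pipeline.

```shell
# Stage the raw disk image in S3 (names below are examples only)
aws s3 cp Fedora-Cloud-Base.raw s3://example-image-bucket/

# Import the raw disk as an EBS snapshot
task_id=$(aws ec2 import-snapshot \
  --disk-container "Format=RAW,UserBucket={S3Bucket=example-image-bucket,S3Key=Fedora-Cloud-Base.raw}" \
  --query ImportTaskId --output text)

# Poll the import task until it completes, then read the snapshot ID
snap_id=$(aws ec2 describe-import-snapshot-tasks --import-task-ids "$task_id" \
  --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text)

# Register the snapshot as a bootable AMI
aws ec2 register-image --name Fedora-Cloud-Base-example \
  --architecture x86_64 --virtualization-type hvm --ena-support \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=$snap_id}"
```

In a collection, each of these steps maps onto existing `amazon.aws` modules rather than raw CLI calls, which is what makes the event-driven, post-test-promotion automation straightforward.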
What else would you like to know about cloud? Now that I realize I have more time, this is going to be more fun, actually. Okay, I'm going back here to this one. This is kind of an exciting part for me, the Fedora cloud IDE. The reason I think this is kind of amazing is that it represents a lot of things, Btrfs being one of them. It's totally deviant from what you'd expect an image to do: if you were pulling together a spin right now, you probably wouldn't pull together a Cloud9 spin. The reason we're able to do it is that we can do a whole lot of this in post, in ways that are consistent with whoever holds the contract. If I look at what happens in a service team at Amazon, they're required to use EC2 Image Builder, and I can build a document with Ansible that's just as easily leveraged inside EC2 Image Builder as one used inside Ansible Automation Hub. That way, if we have an event-driven requirement to build something because of a zero-day exploit, we can literally roll it into a golden-image pipeline, and that pipeline can produce all of those images. That's something that's really exciting. You can think about it in terms of the characteristics of other service workloads, and my goal is to become consistent with the requirements Stef Walter has around software as a service.
So, basically, supporting a lot of the work being done on top of OpenShift and OKD in the community to identify how we can build software as a service. One of the guys on my team at Amazon has adopted the concept of doing Pagure as a service, to leverage that as one of our ways of producing visible support for an application on top of both the Fedora Cloud experience and the OpenShift experience. From a cloud perspective, I don't want to muddy the experience users have around OpenShift and container-based workloads; I want to enhance it, and where users have specific techniques or expertise, ensure they have supplemental models for it. And then we provide things like the cloud IDE, where they would not normally have chosen to go to something like Eclipse Che to do their work. They would have been tinkering with Amazon Linux under the hood, or Ubuntu, and they think that's a great fit, but they're really looking toward a RHEL architecture or infrastructure, and they want to know what that RHEL infrastructure will look like when they do that configuration. And because we have this package maintenance group, we can take the package maintenance for things like the cloud development kits and bring those kits back into the IDEs immediately, so we have a point-in-time support model and know exactly where our support exists. Then our customers can understand, sorry, our users; I keep saying customers, but they're not customers, they're users.
Our users are capable of making decisions that qualify their workloads in ways they're already familiar with. So the strategy is: meet them where they're living and where their technology exists. Btrfs, again; I'm jumping around a little, sorry about that. The Btrfs decision we made supports send/receive models for snapshots and snapshotting. Obviously we could have chosen a much larger file system, but that's already covered by the larger cloud providers, for example the EFS utilities for the Elastic File System, which uses NFSv4 to create parallel file storage across multiple availability zones; we can take advantage of the EFS utilities to just mount that on the instances themselves. That showed up in a lot of the OpenShift configurations, based on some of the experimental work we were doing around Fedora Cloud. And with Btrfs, Kiwi gives us some extra flexibility: it makes it possible to leverage techniques you would previously only have gotten with LVM as a foundation. But LVM is terrible in the context of the public cloud, not because it isn't a sound technology, but because the drives themselves are already striped: any NVMe drive you use there isn't one drive, it's multiples underneath. The LVM structure doesn't add any capability, because growing and shrinking the volume is already there; you just have to resize the file system. And if you need another partition, that partition can take advantage of another EBS volume, so you don't decrease the I/O performance on the volumes where you need it.
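The send/receive model mentioned above can be sketched roughly as follows, assuming /var is a Btrfs subvolume and the freshly attached EBS volume shows up as /dev/nvme1n1; the device name, mount point, and snapshot name are illustrative.

```shell
# Format and mount the new EBS volume (device/mount names are examples)
mkfs.btrfs /dev/nvme1n1
mkdir -p /mnt/newvar
mount /dev/nvme1n1 /mnt/newvar

# btrfs send requires a read-only snapshot of the source subvolume
btrfs subvolume snapshot -r /var /var/.migrate

# Stream the snapshot onto the new volume
btrfs send /var/.migrate | btrfs receive /mnt/newvar

# Received subvolumes arrive read-only; flip the flag before reuse
btrfs property set /mnt/newvar/.migrate ro false

# Quiesce writers (e.g. the database), then swap the mount
umount /var
mount -o subvol=.migrate /dev/nvme1n1 /var
```

For minimal downtime, you can take a second, incremental send with `btrfs send -p` after quiescing, so only the delta written since the first snapshot has to be copied before the swap.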
So if you're doing something like /var/log or /var, and you're creating a database that lives inside that EBS volume, you don't have to sacrifice your operating system's performance to use it. One thing we can take advantage of with Btrfs is snapshots: we can send and receive the contents of the current /var to the new EBS volume, with whatever IOPS reservations you have in total, and then that can go back into place on the instance. Now I can mount the new EBS volume with all the content I had in the original /var, and since it's just a snapshot, there's no waiting period: make the send, quiesce the database, push it out, and you're done. Minimal downtime. We can take advantage of a lot of that flexibility inside the operating system. We can also shrink the operating system, which is another thing we really like, and that's a big deal. A lot of people do this; I'm totally guilty of it myself. I'll stand up an instance, build CentOS or Fedora images for the marketplace, and then shrink the volume. That means I can have a 60 GB volume for a few minutes while I create my artifacts, then decrease it down to 20 and make a snapshot. That AMI is exactly what I need: it has all the content required to make an improvement to the base disk and ship it back up to the marketplace.

[Audience] You mentioned the Amazon Linux space also; what's the relationship there?

Oh, okay. Well, I work with the Amazon Linux team as well.
So — a long time ago — what at that point was Amazon Linux 2022 (and then it slipped and became 2023) was envisioned by Max Spevack, a former Fedora Project Leader. Max wrote the documents, created the vision for what this would look like and how we would do the implementation. And then he got another really great opportunity — he's working with him now. But he left us that legacy of how Amazon Linux could be associated with the upstream experience in Fedora. Originally the goal was to branch at F35 and use F35 as the foundation for the first version of Amazon Linux 2022. The concepts there are twofold. If you're familiar with Amazon Linux — and this is a story that will make Peter's hair curl a little bit — Amazon Linux was originally created because senior leadership wanted to accelerate their support for new hardware. And originally they built on RHEL 5 — RHEL 4 and RHEL 5 were the original foundation for that whole cloud — I think Matt Wilson and Cristian Gafton and a bunch of other folks like that. The vision they had was to move a lot faster. The options were to build their own distribution or to do something a little bit derivative. So they started to take a lot of what was being done in the CentOS community and move that into what they called Amazon Linux: removing the trademarks, doing all the things, focusing on the hardware they had at hand and the modifications they made to their hardware — there are security chips and all sorts of things you can confirm landing in the actual flash ROM, things like that.
And so they wanted those improvements in place pretty much immediately. And then they thought: you know what, we can make this available for customers, customers can use it too, and we'll find out more about it, we'll get more bugs. We all know this story, right? We value CentOS. So the goal there was to keep that consistent for customers. And one year turned into two, turned into three, turned into four. And then there was a PHP version: there's PHP 4, then there's PHP 5, then there's PHP 6, and all these customers who are using them. We had the business discussion earlier, right? Here's the complexity of the data-driven part: well, we have all these customers running PHP 4, but these customers need to have PHP 5. So you started to see this hodgepodge of things — very similar to what was happening inside the Red Hat community. And that's the birth of modularity, right? Right there. We all had the same experience trying to maintain multiple versions, software collections, and all that. And so Amazon Linux 2 became this really complicated, hard-to-maintain thing, way divergent from where it was originally. Compatibility was not an option. But it still maintained kind of the same structure as the CentOS builds, and that made it easy to use a lot of the RHEL 5 stuff originally, then the RHEL 6 stuff, then RHEL 7, then some of the 7-and-8 mix around different kernel versions. And then Amazon Linux became Amazon Linux 2, and Amazon Linux 2 kind of tried to push the GCC version much farther.
And then Amazon Linux 2022 was coming up, and then that structure — the restructure, the re-org... well, things happened that impacted lots of people's expectations, and it slipped into Amazon Linux 2023. Amazon Linux 2023 ended up branching from F36. And there was never any goal — this is one of the things that Max set up — Max never had a goal of being 100% ABI-compatible with Red Hat. His goal was to be 100% focused on customer problems and customer experience, and to work through that in the context of what was inside the distribution. That said, they do have very specific goals around support: they want to support the machines they have in their data centers, and the configurations that are important to the services they run. So that makes it much different from what our goals are in Fedora. My goal in Fedora is to ensure that if there is a workflow we can help a user integrate, then we're helping them integrate it in the context of the Fedora experience, leading them into the workflows we're perfecting. So where we might have very specific goals around ensuring a baseline of support for Btrfs and other component parts, the Amazon Linux team will be focused on their performance requirements for EBS volumes, or on what the Lambda service needs in order to have a good foundation for container-based workflows. And they don't care if the package is in, you know, EPEL — whether there's an EPEL package associated with it isn't important to them. That's more of a "you can go bring that compiler yourself; let us know if there's a problem," right? And they do a public PFR, just like we do.
And they work off of that with a data-driven approach: they prioritize based on customer demand. We prioritize based on where our expertise is and who's working on the project — who's dedicated. And I think if you looked at the work that Microsoft did with Flatcar, or Google's comparable program, you would find much the same kind of experience. We've had lots of really interesting conversations — I can only talk about the ones around Amazon, just because I'm so deeply involved in them — like when we were talking to the SDK team and helping them understand what our problems were and how we could help each other. One of these great moments — like having someone make their first commit — was when we were working with Kyle Knapp, who is responsible for most of the work that's done on AWS CLI development, and Tomáš Tomeček asked if he could be a part of that conversation. When he started down that road, we started having a very serious discussion around Packit and the Packit implementation inside AWS CLI v2 as a first point. It's the gold standard that I'm leveraging to talk to other teams about how they can integrate back into that. And not just Amazon — talking to Zach and his team at Google, and to a counterpart of his team at Microsoft, to make sure they understand that we have this consistency model inside the Fedora experience that gives us flexibility in Packit integration — because the AWS CLI team literally releases every day.
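For context on what that Packit integration looks like day to day, here's a hedged sketch of the CLI side, run from an upstream checkout that carries a `.packit.yaml`. This is not a claim about the AWS CLI team's exact pipeline, and `run` only echoes the commands so the sketch doesn't require packit to be installed:

```shell
#!/bin/sh
# Dry-run wrapper so the sketch is readable without packit installed.
run() { echo "+ $*"; }

# From an upstream git checkout configured with .packit.yaml:
# build an SRPM locally from the current upstream state...
run packit srpm

# ...and open a dist-git pull request carrying a new upstream
# release toward the Fedora package.
run packit propose-downstream
```

The point of the automation is that a daily upstream release becomes a dist-git pull request without a human re-doing the packaging by hand each time.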
If you're paying attention, boto3 is out, and then out again, and then out again — and it's painful if you don't have a good automated process for dealing with it. And Nicola Copa on Tomáš's team is co-maintainer — really the primary maintainer, as far as I'm concerned, with me — on that in the project. OK, great. I'm out of voice too, so it's a good time for me to take a rest. Any other questions?

[Audience question, partially inaudible — about specific technology support, and vi versus Emacs.]

No, I'm an Emacs man, so my life is all about Lisp. But, you know, if nano works, it works — they're all just an install away. And that's another thing: the infrastructure team creates the packaging for the updates and the repos in each one of our individual standard regions. So whenever you're pulling your updates for Fedora, you're in fact not creating egress charges in the availability zones — you're still pulling from the same locations. We use CloudFront as a foundation.
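That daily release cadence is exactly what the automation has to notice. A minimal sketch of the detection half — the version strings are made up and `needs_update` is a hypothetical helper, not the real Packit-driven pipeline:

```shell
#!/bin/sh
# Hypothetical helper: succeeds when the upstream version is newer
# than what's currently packaged. sort -V does the version ordering.
needs_update() {
  packaged="$1"; upstream="$2"
  newest=$(printf '%s\n%s\n' "$packaged" "$upstream" | sort -V | tail -n 1)
  [ "$newest" != "$packaged" ]
}

# Illustrative values only -- boto3 releases roll almost daily.
if needs_update "1.34.88" "1.34.100"; then
  echo "update needed"    # prints "update needed" for these values
else
  echo "up to date"
fi
```

With `sort -V`, `1.34.100` correctly orders after `1.34.88`, which a plain lexical comparison would get wrong.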