My name is David Duncan, and I've been working alongside Dusty Mabe for a while now, making sure we have the right kind of resources. Just to let you know: we're still here, we're still working, we have a lot of things going on this year, and the last quarter has been a really fun time to be working on some of these things.

To give you a clear idea of our mandate: we're building the cloud images for public and private cloud configurations, starting with the simple OpenStack images and the standard compressed raw images, and then doing specialized images for the requirements we have for Google Compute and for Amazon. We're not doing the Azure images just yet. I'd love to see that become part of what we're working on, but it's pretty easy to import the images into Azure yourself if you're looking at that. We've also been working on the Vagrant images. The architectures we've focused on are x86_64 and aarch64, with general support for QEMU.

Some housekeeping: the meeting recently moved from Tuesday at 1400 UTC to Thursday, and we now have biweekly meetings. We have the IRC channel for Fedora Cloud, which is bridged directly to Matrix, and the mailing list for the group. If you have any questions or want to participate, that's where you want to go.

So I'm going to talk a little bit about a few things we've done very recently and some of the outcomes. We'll start with hybrid boot. If you don't know, the cloud images were typically set for one boot type or the other. The reason was that the x86_64 images came first; then, when the ARM images were created, there was a requirement to handle them via UEFI. But there are other things coming down the pipeline, like support for Secure Boot.
Secure Boot is an important part of supporting Azure, and we wanted to make sure we have everything together there. It was also just becoming common: the Red Hat images are set up for hybrid boot, and SUSE's and openSUSE's images are set up for hybrid boot as well. It seemed like it was time for us to catch up. So it's super exciting to see this as an initiative for the next generation of the cloud images, and we hope it suits your needs. If there's something you can do to test this in your environment, we'd certainly love to hear your feedback.

This next one is super exciting for me: the use of Btrfs as the default filesystem. We changed the filesystem for Fedora 35 to Btrfs instead of ext4, and there are a few things I think are really exciting about this. We kept the standard single-root-volume format, but Btrfs requires that the /home configuration be a subvolume. So that was the one requirement: we had to create a subvolume, which doesn't really affect the sizing but changes the logical partitioning. I thought this was a really great idea, because now we can do things that are specific to next-generation hardening requirements. For example, configuring a hardened image on a commercial Linux can require a fair number of individual EBS volumes to be created and attached at different locations, and delays in that volume-heavy configuration during boot-up, instance start, or reboot can potentially lead to a miss on the boot itself.
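For context, a hybrid-boot disk layout just means the image carries both a BIOS boot partition and an EFI system partition, so the same disk can boot under legacy BIOS or UEFI. A minimal sketch of what that can look like in a kickstart file; the sizes and the ext4 root here are illustrative, not the actual Fedora configuration:

```
# Tiny BIOS boot partition for the legacy GRUB stage
part biosboot  --size=1    --fstype=biosboot
# EFI system partition for UEFI firmware
part /boot/efi --size=100  --fstype=efi
part /boot     --size=500  --fstype=ext4
part /         --size=4096 --fstype=ext4 --grow
```

With both partitions present, the bootloader can be installed for both firmware paths from the same image.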
So I'm super excited that we have this opportunity to create subvolumes and then make changes in those subvolumes similar to the ones we would normally have made on a standard root volume on premises, like making /tmp noexec: we can create a subvolume and then mount it noexec. As an image, this falls directly into future plans for creating more simplified hardening models. It also gives us some general recovery configurations, like adding redundant metadata, and it makes volume snapshots super easy to work with. I don't know if you've worked with cloud snapshots, but doing block-level replication of a volume takes a very long time. If you want to do something a bit simpler, Btrfs is very much equipped to handle snapshot sends and other operations that are a lot less complicated than what we otherwise have to do to make dramatic configuration changes on specific volumes in cloud environments. So I'm looking forward to this.

I feel like this one was a bit controversial. If you look back through the discussions around the use of Btrfs in the images, there are conversations about whether this has an immediate effect on our downstream support, but it still seems like the right thing to do. We're not behind the times in terms of where other distributions are going, and we felt this was a really good time to make the change.

One of the other things we came across recently in the config was that the Vagrant images were giant, they were huge, and Chris Murphy and Neil squashed this issue. There was a process for zeroing out the entire volume, and the trim was not, in fact, correctly executed.
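To make the subvolume idea concrete, here's a rough sketch of the kind of layout and mount options involved. This is an assumption-laden illustration, not the exact shipped configuration: the `fedora` label, the subvolume names, and a /tmp subvolume are all made up for the example.

```
# Kickstart sketch: one Btrfs volume, with subvolumes instead of partitions
part btrfs.01 --size=4096 --grow
btrfs none  --label=fedora btrfs.01
btrfs /     --subvol --name=root LABEL=fedora
btrfs /home --subvol --name=home LABEL=fedora
btrfs /tmp  --subvol --name=tmp  LABEL=fedora

# /etc/fstab sketch: mount the tmp subvolume with hardening options
LABEL=fedora  /tmp  btrfs  subvol=tmp,noexec,nosuid,nodev  0 0
```

Once the data lives in subvolumes, a read-only snapshot is a single `btrfs subvolume snapshot -r` command, and it can be replicated with `btrfs send`/`btrfs receive` instead of block-level volume copies.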
So when the Vagrant images were created, they were still the large size they had when the kickstart configuration was built. The qcow2 images were right-sized down to the smaller volume type, but the decompressed volume for Vagrant ended up being super large. Neil, thank you for making the modifications in the kickstart that made that work; you really did a lot of work on that, and I super appreciate it.

Another thing going on now is the website revamp. Dusty and I worked out that I would work with the website team on the revamp, and I attended the first planning meeting, but there's more to come. I think a lot of changes are expected, so we'll try to keep ourselves at the forefront of the planning. We want to restore the position of the cloud images on the splash page so that they're easier to discover, and hopefully we'll have some ways to get to specifically the images that people want to use. Then we'll spend a lot of time reviewing the content together, so as this revamp starts to come together I'm looking forward to working with the rest of the members of the special images group to make sure we're getting the right kind of representation and that we know exactly where we're moving.

Also, our PRD needs some review, and maybe some narrowing of scope; it needs to be a little more crisp. That's not to say it's a bad document, but maybe we can hone it down, look at what our goals were then versus what our goals are now, and put this into a better perspective for ourselves. I think the website revamp is a great opportunity for us to do that.

So let's talk a little bit about what we can do next, right?
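The size problem above comes down to sparseness: a disk image keeps its full apparent size unless unused blocks are actually discarded (for example with `fstrim`, or zeroed and re-sparsified) before the image is packaged. A quick way to see the difference between apparent and on-disk size, using an illustrative sparse file rather than a real image:

```shell
# Create a file with a 1 GiB apparent size but no allocated blocks
truncate -s 1G demo.img

du -h --apparent-size demo.img   # ~1.0G: the size the filesystem claims
du -h demo.img                   # ~0: blocks actually allocated on disk

# In a guest, `fstrim -av` discards unused blocks so the host-side image
# can stay sparse; tools like virt-sparsify can re-sparsify after the fact.
rm demo.img
```

If the trim step is skipped, compression hides the waste in the distributed artifact, but the decompressed image balloons back to its apparent size, which is what happened with the Vagrant boxes.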
You know, it's interesting to me that keeping the lights on is a very important part of what we do as a Cloud SIG. Jeremy Eder, one of the OpenShift engineers, said to me one time that the most important thing we can do is provide dial-tone service, and I really took that to heart. What we have to do as a team is make sure that all of these things we have to figure out, like what Neil did with the volume size, get done, so that the customer experience with the cloud images is as consistent as it can possibly be.

But there are some questions I think we should answer. Can we work with the NeuroFedora team to build some spins that make those images useful in the context of cloud environments? There are all these other service tools that could be integrated, and it would be really cool if we could build something ready-made and hand that out. That raises the question of what we're doing for task-specific workloads and how we help people. VDI is exactly one of those spaces that would be really interesting, because there are a number of products out there that are free, or free for use, and if we make it easier for people to use them, I think we can put some pressure on places where some of that is not yet open source but is still being considered for open source, in terms of VDI and other gaming and online experiences.

I kind of have an answer in mind when I ask this question: should we collaborate with the osbuild team to provide solutions? I think the answer is a resounding yes, but then there's the other side.
There are things that need to happen for us to be able to do that. I see Neil shrugging in the comments, and I know what he's talking about: we're at a position where we can support osbuild, but our configurations aren't quite there. So I think that's somewhere we can start to have heavier collaboration and determine how to keep that next-generation image-building process in mind as we continue forward.

Some of the things I think we should look at and consider as part of our charter going forward: evaluating and looking for alternatives in terms of faster boot times. Boot time seems to be a very cloud-enablement-heavy metric, so I'd like to know where we are, and I'm hoping we can track it and use it as a metric to determine whether or not our improvements are successful.

I'd also like to look at new subvolume configurations. Now that we've moved to Btrfs, I think we're going to have some more interesting configuration requirements for the cloud images, not just for one environment but in general; I think we'll find we can make some subvolume configurations that are really helpful. So I'd like to explore that, while making sure we're not getting ahead of the expectations of people who are using the images in their current state.
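If boot time becomes a tracked metric, `systemd-analyze` is the obvious starting point on a booted instance. As a sketch, here's how the summary line it prints could be turned into a single number to record over time; the sample line below is illustrative, not real measurement data:

```shell
# On a live instance you would capture this with:  line=$(systemd-analyze time | head -1)
# Illustrative sample of the summary `systemd-analyze time` prints:
line='Startup finished in 1.2s (kernel) + 3.4s (initrd) + 8.9s (userspace) = 13.5s'

# Pull the total (last field) and strip the trailing 's' to get a plain number
total=$(echo "$line" | awk '{print $NF}' | tr -d 's')
echo "total boot seconds: $total"   # prints: total boot seconds: 13.5
```

`systemd-analyze blame` and `systemd-analyze critical-chain` then show which units are responsible when the total regresses.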
I'd also love to see us work with the CI team to increase package gating and report back to CI on packages that we think are critical to the success of the cloud images. Yeah, Chris, it might be fantastic if we could get openQA to start timing that boot experience, because I certainly don't want to do it for any specific environment; I want it to be something we produce independently, in an environment where we control everything.

In the same vein, we could bootstrap that same CI gating with the various cloud systems and providers. That's not to say I want us to do that work, but I want to encourage them to participate in the CI program and pass that information back into datagrepper, so that we can get the success or failure of image changes or package changes directly from those cloud providers and then have the conversation with the package maintainers. Oh, you mean openQA is already doing that? Cool, we'll look into that more.

And then containers: the idea of adding containers under the Cloud SIG. We've already made some changes that were directly associated with the containers, and there are some changes there, associated with the Btrfs configurations we've done, that have been, I'll say it this way, a little bit complicated to gain support for.
I would love for us to help increase awareness of the Fedora containers and make sure they're in all of the container registries anyone would use, not just the locations associated with Red Hat configurations. And then, obviously, we want to do some basic profiling to ensure we have performance comparisons, and keep our eye on that telemetry as we get better at putting these images together.

So with that, I'll leave this open to questions and discussion. If someone is interested in coming on and talking with me directly, that's great. I don't see anything in the Q&A just yet, but the chat's pretty full. I think this is where we are and some of the things we want to work on moving forward. I'm encouraging everyone to come participate with us, file bugs, or talk to us about the things we've done so far and the changes we've made in the first half of the year, and I look forward to doing some more great work together over the next half.

Any other questions or comments? Cool. Well, I'll leave it there. I look forward to hearing from everyone, and I really appreciate you being here with me while there are so many interesting things going on at Nest this year. So thanks, and have a great rest of the conference.