So welcome, everyone. This is the Fedora and CentOS Hyperscale and Cloud meetup, and your host today is David Duncan. He is the current organizer of the Fedora Cloud Working Group and a member of the CentOS Hyperscale SIG. He is also a partner solutions architect at Amazon Web Services. And if anyone would like to join the meetup, please just ask and I'll approve you, and yeah, let's have a great time, everybody.

So thanks, everybody, for coming. I just wanted to make this an opportunity for us to talk about what's going on in the various SIGs and working groups, and to talk about how you can participate if you want to — or, if you are already participating, to have another opportunity for us to hang out, say hello to each other, and talk about what's going on. I want to start with the Hyperscale SIG. There are so many people here to talk about what's going on there. We've got Davide here to talk about it from the leadership perspective, and Neal, you've been doing so much great work around the kernel lately — it'll be super fun to share some of that. So, Davide, can you just tell us a little bit about the vision of the Hyperscale SIG?

Sure. So the general idea behind Hyperscale is to have a place that people and companies can use to do work that benefits large-scale environments, in the open. There are lots of companies that use Fedora and CentOS in large-scale environments, and all of these companies tend to reinvent the same stuff in-house, in a way that rarely gets out of those walled gardens. So the idea is to move a lot of this work out into the open so that people can contribute and benefit, because there's no point in everybody reinventing the same stuff all the time.
We're also trying to advance the state of the art — make it easy to have a space for people to experiment, to try new technologies, see if things can work out, and then generally improve things there.

Yeah, I think that's great. I think it's been really beneficial to have a lot of the enablement that would normally go on behind closed doors happening out in the open, in a way that everyone can take advantage of. Michel's saying it's time to switch to Chrome. So it is. I saw Michael's request show up for moderation and I clicked on it to approve, but I guess it didn't sync up with his browser or something, so hopefully we'll get this fixed. Can you actually see my camera? Yeah, I can see. You'll find something is wrong, then. Yeah — I saw the request from Michael come in and I approved it, but then I guess it didn't go back to his browser. Yeah, this thing works way better if you use Chrome than Firefox, at least for me.

So then I think the next thing I want to talk about is vision. Neal, you might be able to say some things about this. Where are we in terms of vision? What are some of the things that you're looking at in Hyperscale that are super exciting to you?

Yeah. So, for those who don't know, since Davide just kind of threw me into the water here: I'm a member of the Hyperscale SIG as well as the Fedora Cloud Working Group and a whole bunch of other things here and there. A big part of what I'm doing in Hyperscale is actually around enabling easy usage of — something's making some noise in there, Michel. Yeah, you got it. All right. So what I'm focused on in Hyperscale is trying to enable easy consumption of the technologies that we are providing, and the stuff that we are playing with and working with, where it would ordinarily be very difficult for someone to set it up themselves.
So my first focus has been on the Hyperscale workstation. There was a bit of a buzz around that last year when the Hyperscale SIG spun up, and then midway through the year I started launching the experimental workstations based on CentOS Stream 8. The idea around that was that some of the stuff we're doing — like RPM CoW (copy-on-write), some of the fancy Btrfs features, things like that — is a little hard for people to set up manually, starting from a plain vanilla CentOS system and getting to that. So I wanted to bring in those defaults from Fedora and extend them with some of the stuff that we're working on, so people can see how the complete picture looks from the beginning. Because I can tell you that Meta is almost certainly provisioning with this setup from the get-go, but everyone on the public side doesn't have access to their images. I want to basically build open infrastructure for people to consume this content. And as part of making these images — the Hyperscale workstation, and eventually the Hyperscale cloud image that I'm going to have — these images are going to be designed with all of the information on how to produce them accessible to everyone. So people can reproduce them themselves, tweak them for their own needs, adapt them for their own circumstances. So, for example, right now our installer presets do Btrfs subvolumes for root and home, and eventually I'm going to add /var once the requisite tweaks are done and we can make that kind of work. But maybe somebody wants the root user's home as a subvolume, or maybe they want to relocate something else, or add another setting, or whatever. The point is that all of the infrastructure, all the tooling, all of the instructions — the total blueprint of how to produce all this stuff — is given to you.
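As a rough illustration of the installer presets described here, a kickstart-style sketch of a Btrfs root/home subvolume layout might look like the following. This is a hypothetical example, not the SIG's actual presets — the volume label, partition sizes, and subvolume names are all assumptions:

```
# Hypothetical sketch: one Btrfs volume, with root and home as subvolumes
# (a /var subvolume could be added the same way once supported).
zerombr
clearpart --all --initlabel
part /boot --fstype=ext4 --size=1024
part btrfs.01 --size=1 --grow
btrfs none --label=hyperscale btrfs.01
btrfs /     --subvol --name=root LABEL=hyperscale
btrfs /home --subvol --name=home LABEL=hyperscale
```

Because subvolumes share one storage pool, adding or relocating one (as discussed above for the root user or /var) is a one-line change to a blueprint like this rather than a repartitioning exercise.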
You can take that stuff, tweak it for your own needs, and adapt it to your use case. So, for example, I was recently involved in a mailing list thread about VFX and CG and animation, all that stuff. They're increasingly on the Linux front, but their ability to customize, standardize, and deploy at scale is super limited, because they don't have a blueprint for doing this. And so what I'd like to do is — yeah.

Yeah. And the result, in previous years with CentOS, was that they would just leverage a very stable environment and say, okay, this is not going to change for a long time. But then they would take a kernel and force that to work for a much longer amount of time than we would normally want, right? That would kind of be a security compromise, or create a complication that made it very difficult for us to — yeah.

Yeah, that's the thing. A big part of the blueprint here is that I'm actually designing this in a way that, if those studios say they can't run CentOS — they need to run RHEL, or they need to run Fedora or whatever, because they have a very specific use case or support arrangements — say they need to run RHEL but they want to use a lot of the stuff that we're doing: well, these blueprints can be trivially reconfigured to work with, say, RHEL or whatever. And I've been making overtures to other SIGs in the CentOS community, like the Kmod SIG. If any of you are on Twitter and following me, you may have seen me tweet out a picture last night of my desktop running vanilla CentOS 9 with Btrfs enabled and working.
And the reason for that is I want to give people the opportunity to use whatever supportable configuration they need and still be able to leverage all the cool stuff we're doing — basically, to be able to generate what they need and have the blueprints. Because in a lot of places — places like Datto and Facebook and Twitter, and Amazon and AWS, right — they take for granted that they have the expertise to pull off all these custom things: to make the decisions about how to assemble these images, how to make customizations, how to layer on their own stuff. That stuff is not simple for most people, and giving people blueprints and simple ways to take advantage of it — for the workstation, for the cloud, even for bare-metal servers eventually, if I can ever figure out how we're going to do that — that's what I'm focused on.

Yeah, I think that's fabulous. And it brings up something that I think is really great, which is that complement to the Kmod SIG — I think having the kernel modules is a hugely complementary experience. Do we have any goals around continuous integration and testing for all those packages and the associated kernel?

Sure, yeah. I think Davide has more details about some of the specifics. From a general perspective, I'll say we eventually want to get to a point where, for all the blueprints I'm working on and all the configuration work I'm doing, if it can't be upstreamed into CentOS itself — and we've done a lot of work with PipeWire, WirePlumber, stuff with nginx; if you read the latest quarterly report, you'll see there's a whole bunch of stuff between workstation and server that I did — if they can't be integrated there and we've got to hold them on our end,
we want to make sure we can continuously test them, continuously rebase them, continuously upgrade them. But Davide has some specifics that he can probably talk about, because he knows that.

Yeah. For the Hyperscale content specifically, we're trying to leverage the CentOS CI as much as possible. The CentOS CI provides us, among other things, an OpenShift environment that we can use to spin up basically arbitrary containers. Right now we're using this for doing daily builds of systemd — using our packaging, but based off the systemd git master. And what this gives us is that whenever something breaks because of a change in upstream systemd, we know about it way ahead of the new release, and we can adjust our packaging as needed. We want to extend this to other packages as well. We started rigging up a similar setup for the kernel, but we haven't quite wired it up yet. The other thing I would really like to have at some point is a way to deploy what we build into VMs in a CI environment, so that we can boot a VM with our kernel and our systemd and get a signal on what works and what doesn't. For example, for systemd, one thing we maintain is a set of SELinux rules to make sure that the recent systemd backport we have works as it should on a CentOS system with SELinux enabled. I wrote a bunch of those — and I don't actually run SELinux; I know very little about SELinux. So it would be really nice to get automated input whenever those break, so we can fix them. Because otherwise we're just going to find out at release time, and then I'll have to scramble and grab Neal for help, because he's the only one here who actually knows how SELinux works.

One day, Davide, you will actually gain the knowledge too. You can't lie to me — I saw you figuring it out. Eventually. But yeah, so there are definitely things there.
The other thing I think would be really nice to have: in the SIG we have a number of packages that track ahead of what's shipped in CentOS. Those are relatively easy to maintain, because whenever there are updates in CentOS, our version is always going to be ahead. But there are some that don't necessarily track ahead — they'll be the same package that's in CentOS, but with modifications applied on top, or packages tracking closely. For example, we have a part of the packaging stack that we use to test features like the RPM and DNF copy-on-write enablement work. And whenever DNF or RPM is updated in CentOS, we don't have a good way right now to know that we also have to update our backport. So one thing I would like to have is some automation to figure out that a given package was updated within CentOS, and either file a ticket for us or, ideally, try to do the release automatically — kick off a build and get signal from it.

Yeah, that's a great idea.

We have tickets for all of these things on our issue tracker, if anybody's interested in working on them. I actually spent some time yesterday cleaning them up.

I noticed — I saw quite a few things coming across in email. That was great. So yeah, I think that's super exciting. I was listening to Stef Walter talk about error budgeting in his talk earlier, and it seemed like this is a place where we could really, truly benefit from that. So yeah, Neal, I'm super excited about metal, right — having full bare-metal support.

Yeah, the challenge there, honestly, is that in the CentOS infrastructure and tooling that exists today, we don't actually have a way to produce an install DVD. Like, at all.
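The update-tracking automation wished for above could start as small as a version comparison between the SIG's backport and the distro package. Here is a minimal sketch; the function names are hypothetical, the comparison is a deliberately simplified take on rpm-style ordering (no epoch, release, or tilde handling), and real tooling would query repo metadata and use rpm's own rpmvercmp:

```python
# Hypothetical sketch: flag a SIG backport that has fallen behind the distro
# package, using a simplified rpm-style version comparison.
import re

def split_segments(version):
    # Split a version string into numeric and alphabetic runs, rpm-style:
    # "4.14a.3" -> ["4", "14", "a", "3"].
    return re.findall(r"\d+|[a-zA-Z]+", version)

def compare_versions(a, b):
    """Return -1, 0, or 1 for a < b, a == b, a > b (simplified rpm ordering)."""
    sa, sb = split_segments(a), split_segments(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)          # numeric segments compare as ints
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric beats alphabetic in rpm
        if x != y:
            return 1 if x > y else -1
    # All shared segments equal: the longer version wins.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def needs_rebase(sig_version, distro_version):
    # A backport needs attention once the distro package moves ahead of it;
    # at that point, file a ticket or kick off an automated rebuild.
    return compare_versions(distro_version, sig_version) > 0
```

A cron job in the existing CentOS CI setup could run a check like this per tracked package and open a ticket (or trigger a build) when `needs_rebase` fires.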
The install DVD requires custom tooling — a tool folks in the Fedora, RHEL, and Red Hat ecosystem would know as Pungi, which is the tool that actually walks through the build system, collects all the stuff to push out, makes the repos, but also makes the install images, composes all of that, and pushes it out to the mirror network. That tool is the only tool that can produce the install DVDs; you can't produce them any other way. And when you're working from CBS, which is the CentOS Community Build System, the repositories aren't produced with the necessary metadata for a Pungi run to work on them. So we cannot produce install DVDs as things currently stand. I am experimenting with the idea of creating a net-install ISO, and seeing if I can make that work by perverting the use of the live media creation tooling. Because it is actually very hard to produce this media when I don't know how it's made.

So now I'm going to add something that's kind of mildly controversial. We don't have any way to do this outside of Pungi — but isn't osbuild supposed to provide some of that ability?

So it is supposed to, but I don't see any functionality built into it for it; I've looked at the code. What it can do is produce a file system tree and then create a simple installer that will sync that tree over — basically the equivalent of a live installer, except way dumber. And that's not terribly helpful. I want to be able to give people: all right, there's a collection of packages in here, we've created the install DVD with the built-in repos and whatever, and then they can install the traditional way, with kickstart and all that jazz.
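The net-install experiment mentioned above — repurposing live media tooling to boot Anaconda — might look roughly like the following, assuming the tool in question is lorax's livemedia-creator (the transcript doesn't name it). The kickstart file, result directory, and project name are placeholders:

```shell
# Hypothetical sketch: build a boot ISO from a minimal kickstart using
# livemedia-creator in no-virt mode (runs Anaconda directly on the host).
sudo livemedia-creator \
    --make-iso \
    --no-virt \
    --ks minimal-netinstall.ks \
    --resultdir /var/tmp/hyperscale-iso \
    --project "CentOS Hyperscale" \
    --releasever 9
```

The appeal of this route is that it sidesteps Pungi entirely: it only needs package repos as input, which CBS can already provide, at the cost of producing a net-install image rather than a full install DVD.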
So what I'm trying to do is leverage the same stuff I use to create live CDs, to create a new live environment that all it does is boot up Anaconda in net-install mode. And if I can get that to work, then I'm basically going to throw my hands up and say this is the best you get — because I can't do any better here — and have a way for people to do automated provisioning the same way they would with official RHEL and CentOS media. Because I think that's the real gap we have right now for certain types of consumers: there isn't a great way to use cloud images to install onto bare metal. At least not one that I know of.

Yeah. Well, okay, there are a couple, but they're not packaged in Fedora and EPEL yet. Although, if someone's interested: for example, I think there's a tool called MAAS — Metal as a Service — and I think one of its defining qualities is that it uses cloud-init to provision bare-metal systems. And it supports CentOS in the open-source, open-core version. If someone were to package that in Fedora and EPEL, we could actually put it on our Hyperscale roadmap.

Yeah, absolutely. Hey, Thomas, thanks for joining us. Did you have a question or something you wanted to talk about?

No, I'm just trying to figure out what's going on.

Well, yeah, we're just trying to talk a little bit about what's going on in the Hyperscale world. And then I thought, since we've got a few minutes left, I'd love to talk a little bit about cloud too. Obviously, all of this work ends up giving us virtual machines, and those virtual machines function in many different ways in the cloud. But we also have a couple of working groups for that as well: the CentOS cloud image group, and then the Fedora Cloud group.
We don't have a Fedora hyperscale group just yet, Neal, do we?

No, we don't yet. I think before the summer, Michel and I will wind up creating a Fedora hyperscale group. I don't know what we're going to do there yet, but I think it's going to exist, simply because as part of a lot of the CentOS Hyperscale work, we already do a ton of work in Fedora. And it might make sense to incorporate that into a SIG anyway, for organizational purposes and to clarify where some things are going on — because right now it's kind of spread out everywhere and just happening randomly. Bringing that together in CentOS was clearly good for us; it might also make sense for us to do that in Fedora too.

Yeah, it very well could. One of the things I thought would be really helpful is being able to produce much more upstream versions of some of those modules. Having that as sort of a pre-release would be ideal — knowing where the problems and the bugs are in the Fedora releases, versus trying to determine what they are after we'd expect them to be stable.

Yeah, I could easily see, maybe as a starting point, Fedora hyperscale just being the analog of CentOS Hyperscale that does everything against, for example, ELN and Rawhide. Because one of the things I want to avoid in Fedora is having more special variants that are just doing these things. I want the technology we're doing in Fedora to be integrated into the distribution proper, shipped to everyone, and getting the maximum amount of value for the community. So the focus would be a little different from what we do in CentOS.

So, speaking of ELN, I think it'd be really interesting to try to do a hyperscale version of ELN, and see which parts of our stack still work on ELN and which parts stop working.
Because that's something I'm personally interested in: I want to use ELN within Facebook as a way to get a preview of what might be coming down the pipeline to us, so we can plan ahead. But it's something I think would be generally useful — like, oh, we found out that this change that was applied in Fedora 38 or something is impacting our backports, or our kernel work, or whatever. And then we can either course-correct on our side or provide feedback on the Fedora side, and I think everybody benefits from that.

Yeah, you can just make Michel run it. Yeah.

Well, and also — at the end of last year, Amazon announced that Amazon Linux is based on Fedora now, right? And I'm hoping AWS will participate more and pull more from Fedora Cloud into Amazon Linux — crossing my fingers here, because we did a lot of good stuff in Fedora Linux 35 that I want to see in the final Amazon Linux 2022.

Yeah. But it might actually make sense, as part of Fedora hyperscale, that we bring those configurations in as well and make sure they're actually being validated.

Well, that brings me back to the concepts around the cloud SIGs. One, I want to state that I think that as the cloud working group, we have a responsibility to the ELN folks to create some images for them. And I think that's definitely on the radar — looking at Fedimg: we've got to get rid of that. Love Sayan and everybody who worked on it in the past, but we've got to get rid of it. And hey, Troy, great to see you here — I just want to shout out your support of Carl George when he phoned a friend the other day. Yeah, that was great on the stream. That was fantastic.
And so I think we have a big responsibility to ELN to get their images back up and cloudified, and to make them a much more important part of that image space. And I also think we have a lot of work to be done around making those images available, and the modules available, and getting a better test bed all the way around. So I'm super excited about that. I'd really love to see ELN be a larger focus. Yeah, Mohan, you're getting some great board games here.

I just want to say, Neal, by the time I joined you were talking about Pungi and trying to generate some images and stuff. I don't know what it's about, so if you want any help, just let me know. I'm more than happy to help.

Yeah, I will probably want to talk to you about install DVDs and net-install ISOs — I think they're called boot ISOs, or whatever. I will want to talk to you about those, because I want to be able to make them in CentOS.

Maybe we can get Mohan to come in on the stream. Oh, go ahead, Michael.

Yeah, I might be able to interrupt for just a minute: we're approaching the last official minute for this session. So I just want to thank David for hosting this great meetup. And thanks, everybody, for joining — it's been an awesome discussion. Super.

Davide, do you know who did all the mass rebuilds in CentOS Stream? Did it all the way? Oh, okay — mass rebuilds in CentOS Stream. I'm already in the stream. Yeah, he's been hitting the hammers all the time in CentOS Stream. Oh, yeah, that's fantastic. And I know that everybody who's doing this has got their day job, so I'm always super happy about everything that gets done. And Michel, you made a point that we should talk a little bit about multilib. I think that's an interesting topic — you want to start us off?
Yeah, this just came up recently: basically, we build in CBS, and CBS only builds x86_64. So if we want to ship a newer package in Hyperscale than what's in CentOS, we basically lose multilib support. Anyone who installs it and needs the 32-bit libraries will basically be told: well, you cannot use this package.

Yeah, we should be able to fix that, because I think CBS can actually pull in the Koji buildroot from Stream. So that lets us bypass that particular bit of ugliness, if we try hard enough.

It does. But we'd have to talk to Mohan and Fabian to figure that out.

We can at least — yeah, we've got a pile of things at this point. I think we're out of time. I believe so. Yeah. There's another session starting in a few minutes, so we should probably wrap up on our own. Absolutely. Appreciate leaving a little extra time. Oh, sorry — go ahead, David.

I was just going to say thanks to everyone for coming and participating. This has been super fun, just having a few minutes to talk about this face-to-face and kind of on the record. So I look forward to the next time. And I'll remind people that CentOS Hyperscale has a meetup — we get together and do this over teleconference from time to time. So come join us.

Yeah, we've got a monthly thing, and we have a Matrix room: #centos-hyperscale:fedoraproject.org. We have weekly meetings on IRC, although we will eventually move those to Matrix once things are figured out. Monthly meetups on the video call — go check the CentOS calendar. Yep. And same for Fedora Cloud and the other Fedora SIGs.

Yeah, thanks, everybody. And next year I would definitely recommend scheduling an hour for this discussion. Right on. I think Tomas gave us what he had. All right. Thanks, guys. Bye-bye.