So my name is Brian Stinson. I work on the CentOS Project. Normally I'd be working on some of the CI infrastructure and things like that, but for the past few months I've actually been building CentOS 8 as part of my day-to-day job. So I'm talking here about building CentOS with familiar tools, but before I do that: how many of you in here have used CentOS on your system once or twice? Yeah? Okay.

To talk a little bit about some history and what we do as a project: we take the sources that come out of the source delivery process from Red Hat, from Red Hat Enterprise Linux, and we rebuild those sources to produce a free distribution called CentOS. If we went back into ancient history, why this all came about would be an interesting discussion, but not for this particular talk. I bring it up to say that this is basically a huge rebuild operation. We're taking sources that have already been produced into an operating system and recreating it out in the open.

As for the contemporary history of CentOS, and by contemporary I mean going back to the CentOS 5 days (4 and 3 followed pretty similar patterns in the way we built the distribution), this was typically our mode of operation in the CentOS 5 and 6 days. That guy in the middle whose head got cropped out is Johnny Hughes. He's been building RPMs for CentOS for a long, long time. Basically, we feed Johnny coffee and source RPMs, and out comes CentOS. For the lifetime of those two releases this was a pretty manual operation, which is what I'm getting at here. Building CentOS even today involves some really bespoke operations, because our mission is to go out and recreate sources that have already been produced.

For CentOS 6, the sources came from ftp.redhat.com, posted there in source RPM format. We have some triggers that look for new content at ftp.redhat.com and run it through our build system. Reimzul is the build system that came out of the CentOS 6.0 release. This is part of why I call CentOS 5 and 6 the start of the contemporary age of CentOS: we got away from local mock builds on a build farm somewhere and started scheduling builds in a separate build system, beginning with CentOS 6. Reimzul is basically a beanstalkd queue, if you're familiar with the Beanstalk messaging system, plus a bunch of mock chroots on a build farm; obviously you need builders for that sort of thing (there's a sketch of this pattern below). It's relatively simple compared to what was going on even in the Fedora space at the time these distributions started.

Things got a little more complicated in the CentOS 7 days. This was around the time of the Red Hat partnership with the CentOS Project, when a bunch of the CentOS developers became Red Hat employees, and we started some extra activities in the project besides producing the CentOS Linux that we know and love. We added the special interest groups, which focus on projects layered on top of the operating system. We started with folks like RDO, who do a distribution of OpenStack on top of CentOS, and Gluster, Ceph, folks like that. They're still delivering content to this day using a Koji instance that we stood up for that purpose.
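Going back to Reimzul for a moment: the queue-plus-chroots pattern it implements is simple enough to sketch. Here is a minimal, hypothetical version, assuming a local beanstalkd server, the third-party greenstalk Python client, and mock installed on the builder; the tube name and chroot config are made up, and this is not Reimzul's actual code.

```python
# Minimal sketch of a beanstalkd-driven build worker in the spirit of
# Reimzul: each job body is a path to a source RPM, and the worker
# rebuilds it inside a mock chroot. All names are illustrative.
import subprocess

import greenstalk  # third-party beanstalkd client: pip install greenstalk

MOCK_CONFIG = "centos-6-x86_64"  # hypothetical mock chroot config


def run_worker() -> None:
    client = greenstalk.Client(("127.0.0.1", 11300), watch="builds")
    while True:
        job = client.reserve()    # block until a build job arrives
        srpm = job.body.strip()   # the job body is the SRPM path
        result = subprocess.run(["mock", "-r", MOCK_CONFIG, "--rebuild", srpm])
        if result.returncode == 0:
            client.delete(job)    # success: drop the job from the queue
        else:
            client.bury(job)      # failure: park the job for inspection


if __name__ == "__main__":
    run_worker()
```

The real system layers triggers, multiple builders, and log handling on top of a loop like this, but the core shape is the same.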
And again, this is kind of a tale of fifteen build systems or so, because we've maintained the Reimzul build system that we have for 5 and 6, and Reimzul is also used in a separate capacity to build the CentOS 7 distro. We have the Koji build system for the special interest groups, and just a few months ago we ended up retiring a Plague setup we had to help us with an armhfp bring-up, because traditionally, with some of these other things we've been talking about so far, bringing up an extra architecture is kind of hard to do.

With CentOS 7, Red Hat started shipping the RHEL sources to git.centos.org. That's a place for them to dump all of their packages, but it's also the place of record for what goes into RHEL from a source perspective. And we actually rewrote Reimzul. It's open source; you can find it on GitHub under the CentOS organization. Like I said, it's a simple Beanstalk thing built around some mock chroots. But even with this rewrite, the workflow itself really didn't change a whole lot. We're still handing coffee and source RPMs to Johnny, and out comes CentOS, because the source layout in git.centos.org is basically: take a source RPM, explode it out into the directory layout you're used to with SRPMs (the specs and a couple of metadata files), and check it in to git.

That's the handmade nature of the way we've built CentOS in the past, and it gives you a clue as to some of the problems we've had. Some of these problems transcend releases; we've had them from day one, and we just deal with them as a matter of course when building a CentOS distribution.

Each release has its own individual needs. CentOS 6 is probably the one whose individual needs we have to spend the least amount of time catering to. It's pretty straightforward: we build the source RPMs and the distribution comes out. CentOS 7 grew the need for us to build different parts of the distribution with different toolchains. If you're familiar with the developer toolsets that ended up being shipped in the middle of the 7 release stream, we had to tweak the way we build some of the packages and leave the rest alone.

Another problem, and we've found this in the 8 series as well, is that we don't always get what I call the filler from Red Hat. This comes in the form of intermediate build roots. The pathological case: a particular library or application has a 1.0.1 release that requires 1.0.0 to build. When they release 1.0.2, it requires that middle 1.0.1 build, but Red Hat may or may not have released it, depending on where it landed in their update cycle. So we may not get the intermediary package that was used to build something they've produced, and we have to reverse-engineer what was in the build root at the time Red Hat built it, so that we can go back and inject those things. That's part of why the process for CentOS has been so manual. It also speaks to some requirements we have that don't shoehorn nicely into build systems like Koji. For example, we regularly rebuild a particular binary RPM with the same NVR against different components in the build root, depending on what we find out about the libraries that were in there at the time. And Koji doesn't like it when you do that.
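To make that constraint concrete: on a Koji hub an NVR is effectively a unique key, so there is no clean way to have two different binary builds answer to the same NVR. Here is a small sketch using the standard koji Python client; the hub URL and the example NVR are placeholders, not real endpoints.

```python
# Sketch: query a Koji hub for a build by NVR. The hub treats the NVR
# as a unique key and returns at most one build record, which is why
# quietly rebuilding the "same" NVR against a different buildroot
# fights the data model. URL and NVR are placeholders.
import koji

session = koji.ClientSession("https://koji.example.org/kojihub")

build = session.getBuild("bash-4.2.46-34.el7")  # example NVR
if build is None:
    print("no build with that NVR")
else:
    print(build["nvr"], build["state"], build["task_id"])
```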
There are a couple of operations that will let you do that, but some of the other tools end up breaking if you do it too often.

Time bombs and other package cruft are a real problem. I don't know if they just think it's funny or something, but a lot of far-upstream developers will put in tests that include a certificate, or some sort of date check, that gets committed to the package and run as part of %check. And that is brutal for us, because there are times when the certificate is generated at the beginning of the RHEL development cycle and expires before the actual RHEL release has gone out. So RHEL has actually built it, and it worked just fine in their build system. We go and try to rebuild it in CentOS, and the certificate has expired. That's a huge problem that we find in more packages than we should (there's a sketch below of one way to catch these before they bite).

Then, by the time we get even to a .0 release, the Fedora release that a RHEL major release is based on is usually two releases behind current. Take RHEL 8, for example: it's based on Fedora 28. RHEL 8 released on May 7th of this year, Fedora 28 went end of life on May 28th, and we're still in the middle of the build process here for CentOS 8. We typically find that we need to go back and do some archaeology to find things that satisfy dependencies, or to use as part of our base infrastructure, to actually get this stuff out the door. Especially for the older releases, we don't really pay much attention to what goes on internally at Red Hat, just to keep some separation there. With 8 we're relaxing some of those rules; it's a little bit easier for us to operate as a team and figure out what Red Hat was doing at the time. But we still find ourselves going back to Fedora quite a bit, and we're going back to an end-of-life distribution to get some things bootstrapped, and that can be a problem.

So CentOS 8 was an opportunity to automate some of our processes, and we've always wanted to use the tools that Fedora and RHEL use. Being part of the family makes it easier to talk about things with other developers: we can hand them a Koji build and they can see the output of what happened in the process, instead of going through our logs and trying to dig through the way we've done things before. CentOS 8 was kind of a breaking point that allowed us to reduce some of those problems I was talking about before. I'll talk about those in a minute.

CentOS 8 also came with a few motivating problems, mostly around modules. This is not about their usability for an actual consumer of the operating system, but about how we're going to build them. That turned into a particular challenge when we're trying to recreate, after the fact, what went on. So we did allow ourselves to relax some of the rules. For CentOS 8, module NSVCs (name, stream, version, context) can differ from RHEL. The name and the stream are always going to match, but the version and the context are generated by the build system, and that's just something we're allowing ourselves to relax a little bit. Now, I will say that we're not relaxing the restriction on RPM NVRs in BaseOS. All of that is going to maintain the same NVRs you know and love from RHEL. But relaxing that modular constraint made things a little bit easier for us.
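About those certificate time bombs: one mitigation is to scan a package's sources for certificates that will already be expired on the rebuild date, before the build ever reaches %check. Here is a minimal sketch of that idea, assuming the third-party cryptography library; the directory layout and the use of the current date as the threshold are illustrative, and this is not part of our actual tooling.

```python
# Sketch: flag test certificates in a package checkout that will be
# expired by the time we rebuild. Requires: pip install cryptography
import datetime
import pathlib

from cryptography import x509


def find_time_bombs(sources_dir: str, rebuild_date: datetime.datetime) -> None:
    for pem in pathlib.Path(sources_dir).rglob("*.pem"):
        try:
            cert = x509.load_pem_x509_certificate(pem.read_bytes())
        except ValueError:
            continue  # not a certificate (keys, CSRs, bundles, etc.)
        if cert.not_valid_after < rebuild_date:
            print(f"{pem}: expires {cert.not_valid_after:%Y-%m-%d}")


# Example: anything expiring before "now" is a rebuild time bomb.
find_time_bombs("SOURCES", datetime.datetime.utcnow())
```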
And we separated the implementation of the build system itself. Typically in the past we've run everything, including a bunch of our alternate architectures, through the same build system at the same time; Johnny would go drink his coffee and submit a whole bunch of builds to the same system. But we've allowed ourselves to separate those implementations as our policies change a little bit. There are some interesting things going on in multi-arch for the RHEL 7.7 release, I think, and we're having to figure out what that's going to mean for us and how our policies will change, especially as we go through the 10-year life cycle. Separating out some of the build systems in different places lets us manage things a little differently.

So we came up with a tool called mbox. It sounds fancy, like it's an actual thing, but basically it's just Koji and MBS: the Koji hub, Koji web, the Module Build Service, and a couple of helper services, all deployable into an OpenShift namespace. We containerized the Koji hub and MBS. There are a couple of things you get from that. You can use the storage layers in OpenShift to manage your volumes and things like that, and you can administratively separate things that need separate build systems while consolidating a little bit of the management. I'll talk about how we did that in just a minute, but I wanted to put up a link. This is the upstream as it exists right now, and yes, there are two Bs in that repo name, if you want to take a look. This is something that Patrick wrote quickly for us while we were scaffolding this out.

So how did it go? We heavily modified our instance of mbox as we went through the day-to-day process of building 8. We ended up needing config changes that were different from the defaults, so we've diverged quite a bit from upstream. We started with two instances. For CentOS 8, our primary architectures are x86_64, ppc64le, and aarch64. We also have a community member who is really interested in the 32-bit Arm space, and we gave him a whole Koji and MBS setup so that he could do armhfp for us.

Here's a link to the primary instance, and this is what it looks like: the Koji interface that you know and love. This is something new for us too, because for the 7 release we posted all of our build logs to an Apache instance, and folks could go out and look, but they didn't have this relatively nice interface, compared to digging through Apache indexes, for seeing what we were doing at the time. That has been kind of helpful. We're still ironing out some of the policies with our QA group, who control the release process of CentOS, so we're not exposing the image artifacts that come out of Koji, for example. But we do want to expose the RPMs, the logs, and the other collateral that runs through the build system as we go.

And that brings up another point: since we have a Koji, it's easier for us to use existing compose tooling like Pungi, and that lets us build entire trees early. In the 5, 6, and 7 days, when we were building things by hand, the trees, meaning the repositories with the correct metadata in the right places, didn't come together until very, very late in the process.
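Mechanically, kicking off one of those early full-tree composes amounts to pointing Pungi's pungi-koji command at a compose config and a target directory. Here's a hedged sketch of that kind of wrapper; the config name, target path, and use of the test-compose flag are illustrative assumptions, not our production setup.

```python
# Sketch: trigger a full-tree compose with pungi-koji once enough of
# the distribution is built to close dependencies. Paths illustrative.
import subprocess


def run_compose(config: str = "centos-8.conf",
                target: str = "/mnt/compose/8") -> None:
    subprocess.run(
        [
            "pungi-koji",
            f"--config={config}",      # variants, gather and image settings
            f"--target-dir={target}",  # where the tree and images land
            "--test",                  # mark this as a test compose
        ],
        check=True,
    )


run_compose()
```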
Back then, we did our repo structure as part of the RC process, just before we pushed it out to the mirrors. That also meant we did Anaconda last whenever we pushed out a release, which is entirely the wrong place in the process for this distribution, because a lot of times we need patches for de-branding and things like that. Pungi lets us build entire trees early: basically, as soon as we had something that would close dependencies, we could generate the entire trees and the images that go with them. That helped us quite a bit with the QA process of running everything through our test harnesses; we check some of the metadata, things like that. And having cloud images from day one was really nice, so we could spin them up in some of the KVM environments we have for testing.

So what's next for mbox? We need to reintegrate some of those customizations we made, the random patches to the config files and the updated images, and work on getting those back upstream. We've been heads-down in the build process, so we haven't done much in the way of upstream contribution in that respect, but that's coming. We plan on deploying more instances for other projects and architecture bring-ups. We don't know what those are going to look like yet, but there are rumors of folks who are interested in other architecture projects, and mbox would be a great resource for them. For the community build service, that Koji instance I mentioned for our special interest groups, we're looking into deploying an instance of mbox as well, just to centralize the management of these various instances.

Next is to stabilize the services themselves in mbox. We caught a couple of things as we went along. Because it's in OpenShift, it was nice to be able to iterate on those containers and update them in place. We actually caught a few bugs in the Module Build Service as it updated, and because it was in OpenShift we could just roll back to the previous image and go from there until we got a fix. The last thing is to come here to Flock and talk to other folks who might be interested in this particular pattern; it's almost a build-system-as-a-service pattern. I'd like to collaborate on getting this into Fedora infrastructure as well, in case it's a useful pattern for folks there, and to see if we can turn it into a general-purpose tool that we can use for both distributions.

So what's next for CentOS 8? We need to finish the QA process, and we need to build and test the non-zero-day updates. If you're familiar with how things work, one batch has already been posted, and I think another one is coming pretty soon; all of the zero-day updates for 8.0 are already built. And we need to update our CI infrastructure and some of our other services to include the CentOS 8 images shortly after we flip the bits over and release everything to the mirrors.

That's all I have for today. And to answer the first question: we're going to release CentOS 8 whenever it's ready. We don't really know when that's going to be yet, but we've got the builds done, and the QA folks are doing a really good job of getting it tested. So that's the information I have. What's that? No, most of them are out remote somewhere.
And typically those folks just engage with the CentOS Project. I don't know many of them that do much over here in Fedora land. But yeah, same tree, different branches and stuff. A lot of them focus really heavily on the CentOS side, and they do quite a bit of work for us. That's really helpful.

[Audience:] RHEL 8 eliminated the separate variants; there's no distinct workstation repo anymore. Is CentOS 8 going to be a continuation of having everything just be in one repo? And if so, how are you dealing with modules?

Yeah, so we do have the BaseOS and AppStream split. And the way that Red Hat is working on that now, with the different editions and variants and stuff, they're turning that more into a reporting problem than a content delivery problem. I don't know what they're doing with that. The plan is, basically, we include the exact packages that are in RHEL BaseOS in CentOS BaseOS, the same module streams go in AppStream, and the non-modular RPMs go in AppStream as well. So we're maintaining that split. And then, adding on our extra bits, we have CentOS Extras and CentOS Plus, which we'll also be maintaining in the 8 release. Those are for things we add on that are never going to be relevant to the RHEL ecosystem, like our centos-release packages for the special interest groups. If you want to install Gluster or RDO or something, they put a centos-release package in those repositories that you can enable to get that content.

[Audience:] Where does the mbox name come from?

The name, I think, is an expansion of "module build in a box," because the original goal was that we found out we needed to do modules and didn't really have a good way to do that in a test environment. I'm sure there are many things that... plenty of names. Got to pick one.

[Audience:] There are two Bs in the repo name.

That's right. That's just a typo. Something to do with our DNS. Are we recording? Do you know? Okay, I'll tell you a couple of typo stories when we're not recording. There we go. Other questions?

[Audience:] About the supporting build packages: they gave you the sources. Will there be a possibility to enable that repo and install them on a machine?

Yeah. We're not actually composing Buildroot. If you're a RHEL engineer: we're not composing Buildroot as a separate thing. Those RPMs exist in Koji in our mbox instance, but that's not something we're really interested in composing as a release artifact. So there's a set of packages that RHEL has used in the actual build roots to build individual RPMs, and Red Hat is not shipping those in RHEL just because they're buildroot-only: they have a separate life cycle, and there's no support engagement with that particular repository. They shipped the sources for that repository to git.centos.org, and we actually needed all of those packages to build CentOS 8, but we're not taking the buildroot-only packages and turning them into a repo somewhere that you can enable, because really they're only useful to build the distribution itself. Or they're only used to build the distribution itself; I'm not going to comment on the usefulness.

[Audience:] The main case was something like a tool that gets launched from a Makefile, just to produce an output file. It's only needed at build time, but it's still needed. Internally, this is difficult.
[Audience:] It also exists in the Koji space. I don't think that's a good idea, really. I think CentOS might want to reconsider that: at least making it available as a separate repository, but not necessarily bringing it in.

It's another thing where, for some of the same reasons that Red Hat doesn't want to support it as something they ship to customers, we don't want to promote it as a thing we sanction as a project, just because of the limited guarantees about security fixes and things like that. For me, on my laptop, doing stuff, it really does suck as an experience. But from a project perspective, we don't want to ship things to our users with ambiguous support or an ambiguous life cycle. That's the reason behind that. Anything else? All right. Thanks, guys.