Okay. Let's go ahead and get started. My name is Brian Stinson, and I'll explain in a minute why I'm wearing a CentOS shirt at a Fedora conference: we've actually collaborated on quite a few things over the past year and a half, and one of them is the Fedora CI project. I'm here to talk about some of the processes that go into Fedora CI, give you a little background on why it exists, and talk about the infrastructure and the hurdles we had to get over to get a lot of this implemented for the Fedora process.

So I think it's probably good to give a little background, and I apologize for these slides because they are misaligned, but that's okay. What is Fedora CI? It's a number of different things. It's a Fedora objective. It has a number of people who are tasked to work on it, and a number of people who volunteer their time, both to add tests and to give feedback on some of the infrastructure components. It has a set of infrastructure: we have a good set of hardware behind the project, and there is a place where you can actually go through the process and try it out. And it's a process, targeted at the individual packages that go into dist-git.

The background behind this is that make check is pretty awesome, but it's not quite enough. Once you actually build a package, get it through the system, and install it on a machine, you want some other things to happen after the fact. make check is a good thing to run in the build system; maybe you've put some things into the %check section of your spec file so you get some initial feedback on the quality of the package. But the build system really shouldn't be composing a new artifact, adding your package onto it, and running a whole bunch of integration tests, because maybe your package is pretty complex, maybe you need to set up a number of different systems to do a full end-to-end test, or maybe you just want your package builds to complete a whole lot faster and push the tests further down the road.

Let's talk a little bit about the objective. We talked about the Fedora CI initiative at Flock 2017, and I think it was made an official Fedora objective shortly after that. The background was that there was a group of people who wanted to figure out what continuous integration looks like, based both on pull requests to dist-git, which we recently added, and on what happens when you actually push a branch to dist-git and do a real build. There were a number of challenges the team wanted to look at from the beginning, so I'll talk a little bit about the history. There was a group of people interested in what it would look like if, after a commit to dist-git happened, we just recomposed an artifact including the new package. What would it look like if we built that package, recomposed the operating system, and then ran tests on the operating system itself? And so we started with this really self-contained system that was a side process alongside the traditional Fedora package workflow that no one loved.
What that pipeline did was very focused on the Fedora Atomic package set, for a number of reasons, but one of the nice things the Fedora Atomic package set gives you is a limited, self-contained package set that can produce an operating system. If you imagine building the whole process from scratch, using the Fedora Atomic package set means you don't have to reinvent a whole lot of things until you really need to. I know that's kind of confusing, but it's genuinely easy to build an RPM, compose an OSTree, and then test that artifact itself. That's why we started with the Fedora Atomic package set: it was a nice way to keep things isolated while also giving us a good set of packages and tests to try things out on.

The project grew as these things do, because we started with a small group of people looking to reimplement a few processes and try some new things related to building and testing, but it turns out there are quite a few moving pieces to take care of. And to echo the sentiment from the keynotes earlier, most everything in Fedora is about people, and that's very true of the Fedora CI process as well. There are a number of teams who work on it full time as their job, a number of volunteers, and a number of different groups who all have to collaborate, and who have done quite a lot over the past couple of years to make this happen.

I tried to capture this in visual form, and I'm leaving a ton of stuff out even to get to this page. I've grouped everyone by team, but these aren't necessarily folks working full time on it; they're just the different groups responsible for different pieces of the infrastructure. It doesn't look like this slide fits on the screen here, but you've got the Fedora CI thing in the middle, and around it a number of groups who each own a piece of infrastructure or software that relates to the Fedora CI initiative. You've got the QA team down here, who manage Taskotron and ResultsDB, which are important for getting results in from all the various places. You've got the Fedora infrastructure team, who manage the actual systems you interact with: Koji, dist-git, Bodhi. You've got the Factory 2.0 team, who work on the decision models and the software that helps, or is supposed to help, Bodhi make decisions about gating and things like that. You've got the Contra CI/CD team; those are the folks who wrote most of the Jenkins libraries that operationalize the Fedora CI pipelines, and I'll show you how that works in just a minute. You've got the Fedora CI and Upstream First teams; those folks are focused on the standard test roles, which is another thing we'll talk about in a little bit. Those are the wrappers and libraries that help you either wrap existing tests or write new ones, and they're also helping individual packages get existing tests from various places committed into dist-git. And then there's me, up here in the corner. I work for the CentOS infrastructure team.
When the Fedora CI initiative first started out, we were looking around for a set of infrastructure and a base platform to run these various experiments on. Luckily, in the CentOS project we had some extra capacity, and we were looking to partner with the Fedora project in a number of different ways anyway. So the Fedora CI initiative was important for us as an infrastructure team, because we could provide the infrastructure and also help folks collaborate by using our hardware and some of our existing resources.

I'll talk a little bit about the infrastructure itself, because that's what I'm interested in; it's what I do for my day job and sometimes my not-day-job. It was important for us during the Fedora CI initiative to rethink some of the existing patterns in CI. If you think of the upstream projects you might work on, a lot of people are familiar with the traditional way of running a Jenkins master: a Jenkins master lives on a machine somewhere, and you create a number of jobs that point directly at your repos. That's a lot to manage, and if you don't do it carefully you end up with a lot of pointy-clicky configuration in your Jenkins master. We wanted to rethink that a little and orchestrate things in OpenShift.

To give you an idea of scale, we're still a fairly small project, but we can support all of the stuff going on in apps.ci.centos.org. We started with OpenShift Origin 3.5 and have upgraded in place a few times over the past year, and we're up to about 19 bare-metal nodes that run various processes across the infrastructure. All of the tests that go into dist-git are actually scheduled on different pods in OpenShift; Jenkins orchestrates all of that for us in the pipeline, and OpenShift has been pretty critical in spreading that test load across a number of different machines. Luckily we have some bare-metal resources dedicated to that, so you can spin up VMs in OpenShift if you really want, and there's a lot you can do when you have a bare-metal infrastructure like this. It was also a way to let the folks developing the CI pipeline consume infrastructure in a way that didn't get in their way: they didn't have to come to me every time they wanted to try out a new service or a new part of the Jenkins libraries, and they didn't have to ask me for VMs. It got the infrastructure out of the way of their development process, and that helped quite a bit.

I mentioned Jenkins. OpenShift has a good standard set of Jenkins images that you can deploy in your projects. The CI pipeline folks started with the base OpenShift Jenkins image and then ended up customizing it in a number of different ways, which is a whole lot of fun for certain definitions of fun. If you're interested in how that all works, we do the same for some of our other tenants in OpenShift as well, so I'm happy to talk about that out in the hallway if you're interested in running a whole bunch of Jenkins masters, because I think we're up to 20 different projects that we've migrated as tenants in other parts of the CentOS CI infrastructure.
And yeah, that's been kind of a fun thing to watch. So this is what the CI pipeline is all about, and it looks pretty simple. This is a public URL; I don't have it on this slide, but there's another one later that you can look at. These are a number of the different pipelines targeted specifically at the Fedora CI initiative. You've got this CI pipeline tab right here, which is the initial set of pipelines targeted at the Atomic Host style pipeline; I'll show you a view of one of those in a minute and we'll talk through the individual steps, but I wanted to show this front page first. Some of the interesting things you see: the two pipelines for F26 and F27 targeted at Atomic Host, and this Fedora All Packages pipeline, which we started pretty early this year and which is probably more applicable to you folks in the room. I'll show a little of that in a minute. This front page is completely open, although I don't think we've documented very well how to go in, find things, and get results; we'll talk about that as well.

And finally, the CI pipeline objects and libraries. I mentioned the Contra CI/CD team is working on those; they're all open source in the CentOS PaaS SIG, which is where these folks do their work. So if you look at github.com/CentOS-PaaS-SIG, all of these pipeline objects and libraries are available upstream, and you can check them out. Basically it's a set of convenience methods in Jenkins, imported into all of the masters these folks deploy, that run different parts of the CI pipeline based on the tests and so on. But at the core, the interface for packages is really just Ansible.

The mechanism for doing all of this is pretty easy once you look at it a little. When you put your tests in dist-git, there's a specification for where those go, plus a little bit of boilerplate YAML that you put in the file. I mentioned that the Upstream First team and the folks working on Fedora CI have put together a set of roles that are directly available to you as a package: the standard test roles and the libraries that go with them. They have a number of different methods that make things a whole lot easier. For example, there were a number of test suites tied to a harness called BeakerLib, and when you have a number of packages using the same harness, it makes sense to create a separate role that abstracts a lot of that functionality for the package and makes it much easier to migrate into the test infrastructure. So there's a role that just runs a script, there's a role that will run your BeakerLib harness, and I think they were looking at Avocado as well.

Let's take a look at that, because the Fedora CI initiative is a process, and it's something that requires a little bit of work from folks. I promise it's not hard, but there is a process to it, so I want to look at some examples. This is from lib.mp. Under the hood this is a BeakerLib test, I believe. This doesn't look quite right.
But anyway, there's a role in here; this is the entire test.yaml file in lib.mp's dist-git repository. If you go to src.fedoraproject.org and look at lib.mp, you'll find this YAML file. I'm going to show you a more basic one that I tried out last night for one of my packages, called nvi, because I wanted to show the structure. There aren't really any tests for nvi; if you don't know anything about it, it's a re-implementation of vi from 4BSD a long time ago. So the "test suite" here is just /usr/bin/true, which, I don't know, is a good test suite for this, I guess. But the idea is that the YAML for this is pretty short, especially if you use the basic roles. You can just run a script: either check the script into your repository, deliver it as part of your actual package, or deliver a sub-package that includes the test suite you want to run. So if nvi upstream actually included their test suite and a test runner script, all I'd have to do is package that up, tell it the working directory of the test suite, and then this run directive right here tells it what script to run. It's pretty easy to write that in basic shell if you're just getting started.

You can actually run the tests on your own machine. The standard test roles are packaged as RPMs; it's a number of Ansible roles, like I said, plus an inventory that basically operates against the package you have installed on your local machine. So I would dnf install nvi and then run this playbook with ansible-playbook. This is the standard inventory that says: use the local machine, and don't do any provisioning or anything like that. Then I give it my test.yaml, it spins up Ansible, runs /usr/bin/true, and says that everything passed.

There are a couple of different things you can do. Let me go back to the tags. The tags directive describes the different scenarios the test can run under: the classic case is just a bare RPM on a system, and you can also run in a container context. There are a number of things you can do based on the tags you give your test. One of the cool things about the standard test roles is that they include other inventories as well, so if you don't want to install nvi on your local machine, there's an inventory and a role that will spin up the Fedora qcow2 image, install your package, and run the test there, if you don't want to pollute your machine. That's actually what we do in the CI pipelines themselves; I'll show that in the different stages in just a minute.
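Just to make the nvi example concrete, here is a minimal sketch of what a test.yaml like that could look like using the standard-test-basic role. The field names, and the invocation shown in the comments, are from my memory of the standard test roles documentation rather than copied from the actual nvi repo, so treat them as illustrative and double-check the wiki:

```yaml
# Minimal test.yaml sketch for the "just run a script" case.
# Roughly how you'd run it locally against the installed package
# (verify the exact command against the standard-test-roles docs):
#   sudo dnf install standard-test-roles nvi
#   sudo ansible-playbook --tags classic test.yaml
- hosts: localhost
  roles:
    - role: standard-test-basic   # basic role from the standard-test-roles package
      tags:
        - classic                 # plain installed-RPM scenario
        - container               # same tests can run in a container context
      required_packages:
        - nvi                     # package under test, installed before the tests run
      tests:
        - smoke:                  # "smoke" is just an illustrative test name
            dir: .                # working directory for the test
            run: /usr/bin/true    # stand-in "test suite" from the talk
```

The tags line up with the scenarios I just mentioned, so the same file can drive the bare-RPM run on your laptop, the container case, and the qcow2 VM run that the pipeline uses.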
So let me talk a little bit about what we've done this year. The Fedora Atomic CI pipeline was the first step of all of this: building out the individual pieces of what happens after someone commits to dist-git. There were some pros and cons here, because on the one hand it was nice to have a limited package set and a limited deliverable to work with in the beginning, since that made things easier to develop. But such a limited package set doesn't reflect the entirety of packages we need to address; it doesn't include the entire Fedora packager ecosystem, and that was kind of unfortunate.

The Fedora Atomic CI pipeline, again: I could have put all three of these points in both the pro and the con column, because the same things that made it easier to get started are the things we have to iterate on to make it useful to the general packager ecosystem. The Fedora Atomic CI pipeline runs its own builds; it doesn't call out to Koji or anything like that, it has a separate process for that. It runs its own test composes, building its own tiny little Atomic Host to run tests on. And it operates after the merge happens, so you don't get results until after the push to dist-git has already landed.

So this is the original pipeline, the Atomic CI pipeline. You can see here what package this is: Vim, for Fedora 27. There are a number of different steps. Let's back up: the package maintainers for Vim have just pushed a new commit to dist-git on the F27 branch, and the CI pipeline runs an RPM build on that. There's a limited environment we run that in to get the Vim packages out of it. It injects the output of that RPM build into the OSTree compose and composes a new image. The stage names are a little truncated here, but it composes a new cloud image that includes the OSTree, then boots it and runs functional tests, and then there are some steps further down that are more relevant to the Atomic Host use case and OpenShift. You can see that the tests pass; this is all defined in the repository itself in the tests directory, along with some of the integration tests for Atomic Host and OpenShift. Like I mentioned, this doesn't target the entire Fedora packager ecosystem directly, because it's a limited package set with a small number of maintainers.

So we started the Fedora All Packages pipeline, which fixes some of the differences between the Atomic CI process and what you would expect as a normal Fedora packager. The All Packages pipeline either performs or imports Koji builds, depending on whether it's a merge to dist-git or a pull request: for a pull request, the pipeline goes out, does a scratch build, and then uses that RPM in the tests. It's the same pipeline style for pull requests, and it's available for any package that exists in Fedora right now, not a limited subset. The major con right now is that the results aren't being effectively communicated back to the packager, for a number of different reasons. The practical effect is that we're running a whole lot of tests in the infrastructure, more than you might think, but that information is more or less useless because it's really hard to find.

Let me talk about the number of tests that are available; this was taken this morning. Let's see, what is this? This is Base OS. There are 793 packages in the Base OS group. In dist-git, 81 packages have a test.yaml committed directly, and there are another 116 that have a test.yaml committed to the Upstream First repo. I'll talk about that a little more in just a minute, but Upstream First is a separate place the teams use to stage tests before they get merged into dist-git.
So we've got 81 existing plus 116, and then there are 46 packages where this is a pending pull request against dist-git at the moment. A few more stats: for Fedora Server, there are 518 total packages in the group, 77 of them have a test committed to dist-git, and 100 of them are still being worked on in the Upstream First repositories.

That brings us to one of the things that is still missing, and one thing we hope to solve: the outputs of these pipelines are emitted as fedmsg messages, but that's really the only place we're notifying at the moment. So if you're a frequent subscriber to messages in Datagrepper or something like that, you're seeing a lot of CentOS and Fedora CI traffic. And maybe, yeah, yes, yes. Yeah, I don't know if they're linked directly from here, but this does exist; these are actually populated automatically, so I don't know if the tests directory here was actually created. OK, I will fix that and make sure we link to those; that's a good observation. So if you look in Datagrepper right now, you'll probably see quite a bit of message traffic under the org.centos.ci message prefix. That's where all of this is actually getting delivered. We've got some work to do to tie that back into notifications and make it available to you, but we are emitting fedmsgs out there for each step that you see in Jenkins.

I want to take a minute to work through what's coming up next and what the focus should be, because packagers are an important part of this process. We've been able to build all of these pieces of infrastructure, the individual software, and the processes around them, but the teams working on the Fedora CI initiative can't put in tests for thousands and thousands of packages by themselves; we're going to need help from packagers. So I'm hoping we can make it a little easier going forward to get these things committed.

Let's talk about what's next. Documentation: it was brought to our attention that the quick start on the wiki needs updating, and it needs to be displayed more prominently. Based on the nvi example I went through last night, I think there are some simpler things we could do to make it easy for folks to write simple shell scripts that test their packages and plumb them in directly, and to have a couple of actual quick-start examples in addition to what exists already. There's some great information on the wiki if you want to write your own Ansible roles for doing integration tests and things like that, but I think writing your own Ansible roles is probably a step-two kind of thing that you move on to after you get your individual scripts written. Getting notifications enabled is probably one of the things we should focus on. I know the All Packages pipeline doesn't have a good set of filters in FMN, and I think we should work on that going forward, because right now it's really hard to consume: we didn't provide a good policy for it or split things out in a way that folks can use. And finally, one of the things we should do is keep the conversion effort going. I showed the Upstream First statistics.
There are a number of packages that already have tests migrated from various places, but we need to continue that as we go. If your upstream project has a test suite, I'm happy to talk with you about how to get that into dist-git, and also about approaching upstream about doing some testing there as well. So if you're interested in this, definitely find me, or you can hang out in #fedora-ci on Freenode; we usually hang out in there. But that conversion process is going to be really important to getting more tests into dist-git.

I wanted to leave quite a bit of time for open discussion, because it looks like Fedora CI, and CI in general, is a popular topic. So I'd like to open it up for questions and comments about the process you've seen so far. Yeah.

Yeah, so the question was: we have %check, should we choose %check over writing a new test that runs in the new system? I think one of the major things is that if %check fails, it's always going to fail the Koji build; that's a consideration you have to take into account. But there's also the fact that in %check you can't do actual install tests: install the RPM on a system, maybe orchestrate two or three different systems that are stood up. The idea is that you want to install the RPM on a distro and then run a test suite against that entire thing plus your update. So %check is good, I think, for unit tests in some sense, tests of the internal consistency of the package, but anything that interacts with the system outside of your package is better done in an external system. Yeah, in the back.

Yeah. So that's an implementation detail, because the gating process itself was what was sending messages to the packagers. That goes to what I said a moment ago: we didn't build a good set of policies in FMN that listen to the All Packages pipeline and send notifications themselves, because we were relying on the gating process to do that for us. Exactly. Yeah, that's a high priority going forward, because I think there are a couple of cases where you'd want the test in place for a while, notifying you, before you actually choose to gate on it, for a number of different reasons. I think that's a killer feature we need to work on. I saw that hand next.

Yeah, so the question is: is there an effort to integrate with fedpkg build? I don't know of anything off the top of my head, but I agree it would be pretty awesome if that process actually waited for the tests to finish. The thing is we don't have a guaranteed turnaround time on tests, so it would have to be something you could cancel. But if you want to wait on that, I'd love to talk about it and see what we can do. Neil? Yeah. And that's part of what the mechanism for notifications going forward would be, because the original design was to do all of this through Bodhi, so it would go through the update process. But it is too late at that point, and it's useful to have that information in multiple places. But yeah, I think that's certainly something we should look at for the notification piece. So yeah, Steph has... do you have a comment or a question? OK.
So to briefly summarize the conversation that just happened: Steph was basically saying that notifications are important as long as we act on them, because Fedora is an interconnected mesh of packages and you don't want failures to pile up and affect other packages. We had a little side discussion about where to place that, and I think we agreed that multiple places is good and that Bodhi is probably too late. Is that a good summary? Other things to add? Yes? Yeah.

Yeah, so the suggestion is to run a second round of testing when the update... OK, yeah. So doing combination testing of things in updates-testing was the suggestion. Right, yeah.

So the question is: is there a place to see the CI for the package build, the individual test results for your package? Yeah, so I didn't put a URL in here, did I? That is unfortunate. I will update this real quick; I'm going to post the slides for this, so come find me and I'll get you a link to all of this stuff. It is on the wiki; there are links to these individual pipelines, but I'll make sure you have that information. The last part of your question was that there are two pipelines: we have the Fedora Atomic one and the All Packages one. What was the rest? Yes, so this one right here is the Fedora Atomic pipeline; we do an RPM build of the new package in this infrastructure. For the All Packages pipeline, which... did I miss that somewhere? There we go. The All Packages pipeline is similar, but in the build case, where you just push to a branch in dist-git, it pulls from Koji: you push to dist-git, we wait for Koji to say, hey, I have a new build on that branch, and then we pull that in. For the pull request case, if you open a pull request against a package, we actually do a scratch build for you. Yep, in the back.

Yeah, so the question is: what environments are available in this pipeline? I'm sorry I didn't make that clear. There is a container case you can use in your test.yaml file, but in the standard case for the test roles, what we actually do as part of the pipeline process is take the Fedora cloud image, inject your RPM, and then spin up a QEMU process in OpenShift to run the tests. So we take the base image, explode it a little bit, add in the test suite that you provided to us, install your RPM, and then let that tiny little VM run the test suite. How tiny is that? I don't know; I can take a look afterwards.

Yeah, so the question is: the FreeIPA folks have larger-scale tests to run, and they're concerned about, call it, multi-node orchestration and the resources required for that. There's a separate inventory that will let you do more than just a single virtual machine. And if you end up having a really complex test suite, or you're doing an integration test suite, there are mechanisms you can include in the test.yaml to basically provide a list of what you need to ask from the CI system, and it will give you what you need: memory requirements, or more than one machine at a time, it's the job of the CI system to figure that out. There's an inventory that will allow you to do that, but nobody's using it right now. It's part of the pipeline; I don't think anyone is using it because they're focused on the single-host tests.
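For the VM case specifically, here is roughly how you can reproduce what the pipeline does on your own machine, using the qcow2 inventory script that ships with the standard test roles. The variable names and paths here are my recollection of the standard test roles documentation, not something taken from the pipeline itself, so verify them against the current wiki; the sketch keeps the same test.yaml style, with the commands shown as comments:

```yaml
# Sketch: running the same test.yaml against a throwaway VM instead of
# the local machine (assumed names: TEST_SUBJECTS, TEST_ARTIFACTS, and the
# standard-inventory-qcow2 script -- double-check the docs before relying on them):
#
#   sudo dnf install standard-test-roles
#   export TEST_SUBJECTS=$PWD/Fedora-Cloud-Base-27.qcow2   # cloud image to boot under QEMU
#   export TEST_ARTIFACTS=$PWD/artifacts                    # where test logs get written
#   ansible-playbook --tags classic \
#     --inventory /usr/share/ansible/inventory/standard-inventory-qcow2 \
#     test.yaml
#
# The inventory script boots the qcow2 image under QEMU and points the playbook
# at that throwaway VM; the role installs required_packages and runs the tests
# there, and the VM is torn down afterwards -- the same basic flow the pipeline
# drives inside OpenShift.
```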
So the question is: does the CI pipeline assume x86? At this moment, yes. But there are no limitations other than plumbing hardware into the infrastructure and making sure we have the scaffolding in place; there's nothing in the test.yaml file that would prevent you from doing other architectures.

Yeah, so the question is: is that the simple-koji-ci thing you see in Pagure? No, that's a separate thing. Part of the notifications plan going forward is to have a similar little tick in the Pagure pull request that comes from the All Packages pipeline and says whether a PR passed or failed. No, so right now you have to go into the actual Jenkins interface, which is nasty. I saw a hand back there first. Yeah, I saw you first.

OK, so the question is: is there a way to tell the test system to skip testing on pull requests, because of test suite requirements and things like that? I'm blanking on whether that's possible, but come find me and we'll figure out the answer together. Let's talk about your requirements, because I could see a case where you run the generic test suite on pull requests but have a separate test suite that checks your signatures, yep. But let's talk about that together.

And yeah, sure, the comment was that there should definitely be a focus on the user experience for packagers. If that wasn't explicit, I think we should make it explicit that yes, we're looking into what that looks like for the packager process, because making that smooth is important.

Yeah, so the plan is, there's the Upstream First repository, which is the separate place where tests are being staged... ah, all right, right. I don't believe there are any restrictions if you choose one or the other; I'm trying to remember if we make a recommendation, yep. Yes, so what I'm hearing is we might want to clarify that part of the recommendation. Question in the back.

So the question is: this is pretty awesome for single RPMs, is there a roadmap for modules? I don't know yet. Yeah, I think I repeated the question: is there a plan for modules? I don't know; maybe that's just a documentation matter, because the test system itself should be general-ish. But if there are limitations, or ways we can abstract some of the differences between bare RPM testing and module testing, I think we should. Yeah, Steph, go ahead.

Yeah, so the question is: can you kick off tests for dependent RPMs? That's sort of up to you as the tester, what's important for you to include in your test or not. So if you have a whole bunch of tight dependencies on other RPMs and you want to run their test suites after yours, the Ansible specification will let you do that. But Steph wants to answer again: I've seen that exact thing on Dominic's roadmap for the team, how I can either automatically look at the reverse dependencies and have them trigger all the tests from my package, so I know I'm not breaking anyone, or actually list the other test repos that I care about. I believe that's going to happen soon; if it doesn't, I know lots of testing that I'm involved in won't be possible. Yeah. Yeah.

OK, so the next talk is starting pretty soon. Oh, lunch is starting pretty soon, so yeah, that's even more important. I'm up here, and I'll be around all week if you want to talk about Fedora. Let's continue as we go. Thanks.