Okay, I think it's probably about time, so we can get started. A quick poll of the room: who saw me give a talk with Steph at Flock last year? Okay. Who's built the kernel, the Fedora kernel? Okay, that's helpful for knowing where to pitch this, because today I'm talking about the Fedora kernel source git tree. For that quick poll of the room, I asked who saw me talk last year with Steph Walter. We talked about the elephant in the room, which is the fact that the RHEL and the Fedora kernels are supposed to be the same, but they aren't. They differ drastically, and we went through a lot of reasons why that is, some good and some bad, why we wanted to potentially make them the same, and we had a whole bunch of grand plans and everything like that. As a short update on that: what we wanted to do over the past year was hopefully have Fedora and RHEL be the same, because if you've heard any number of talks, the grand story is supposed to be: okay, here's Fedora. Fedora is where we do all our testing. Everyone does all their work there. We find all the bugs. Then from that we make RHEL, we make it even more stable, we sell it, we make a lot of money. Sounds great, I know. But it turns out that's not quite the story, and we wanted to make everything the same so we could be more stable. What ended up happening over the past year is, I think from the Fedora side, there weren't a lot of changes that people would probably notice, maybe a few spec file cleanups here and there. There was a lot more work internally on the RHEL side of things, mostly discussion about what Fedora actually means and what RHEL actually means. One thing I always want to mention whenever we bring up Fedora: people always appreciate the Fedora community. Even if things aren't always in line, people love the Fedora community and all the work people put into it, which is always great to hear.
There were some minor changes for Fedora, spec file cleanups, and one of the other big things we've gotten over the past year is a lot more CI for the kernel, in the form of the upstream CKI project, which Red Hat started as an attempt to do continuous integration testing for the kernel. There's been a lot of good news. One of the things that has finally come out of this entire project is that we now have a public git tree available. This is a git tree that matches approximately what we want to see in future versions of RHEL. It contains the patches we want to bring in and various packaging scripts, and it's designed to show exactly how we're building the RHEL kernel currently. It's out there, it's available. It's certainly not finished and there's a lot more we need to do, but getting it out there was a good milestone. The focus of this talk is that tree as a source git tree, and why what we'd eventually like to do is use the source git tree directly instead of going through the dist-git tree. A show of hands: if I say source git versus dist-git, who needs clarification on what exactly the difference is? Okay. Yeah. So, a 30-second summary for those who raised their hands: dist-git is where we end up with the Fedora packaging. The source git tree is where the actual development happens. There are lots of reasons why these are two different things, but the point is that we have two separate repositories, so we end up needing to do things slightly differently in each. Part of the reason we want to bring these closer together is that developers on a day-to-day basis really like the source git tree. They don't really want to deal with the dist-git side of things.
This is the dream, really: a developer or a maintainer does all their work in the source git tree, and once they decide, okay, it's time to actually send this out for release, they run a few commands, and then a bunch of scripts or automation or magic basically moves it all into dist-git. The point is that the developer, or anyone else, never really has to think about what exactly goes into dist-git, because it turns out the purpose of dist-git is just to be able to build an RPM. I mentioned maintainer commands, but part of this could also be done by automation. The idea is that when there's a new release, automation would automatically create it and commit it to dist-git, so you wouldn't have to think about it at all. This slide lists a lot of reasons why we like the source git tree. When it comes time to actually build the kernel, you can use the same build commands as upstream. I asked how many people had built the kernel. Usually when you want to build a kernel, you run make defconfig or make menuconfig to configure your options, then you type make to build the kernel, then make modules_install. These are fairly standard instructions that you're going to find if you Google how to build a kernel. From experience as a maintainer trying to help people on IRC and in various forums, getting people to understand how to build a kernel the Fedora way is not always the easiest task, because the instructions are documented, but they're not always the first hit on Google. So having something that people are actually familiar with makes everything a lot easier for users. I'll talk in a bit about making an RPM with the existing tree. It's not quite the same as upstream, but it's something similar.
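As a sketch, the upstream-style build sequence mentioned above looks roughly like this; it assumes you are sitting at the top of a kernel source tree, and the exact install steps vary by distribution:

```shell
# Standard upstream kernel build flow (run from the top of a kernel source tree).
make defconfig              # or `make menuconfig` to pick options interactively
make -j"$(nproc)"           # build the kernel image and modules
sudo make modules_install   # install modules under /lib/modules/<version>/
sudo make install           # install the kernel and update the bootloader
```

These are the commands any upstream how-to will show, which is exactly the familiarity argument being made here.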
While we're also talking about users building their own kernels, another sticking point is that if people want to build custom kernels, they're usually doing it because there's one of two things they want to change: either they want a new module or a new configuration option, or they want to apply their own patches. Who here, among those who have been around a very long time, has tried to patch an RPM using the rpmbuild method, manually? Yeah. That's not a lot of fun. These days, git is the preferred tool of choice for almost all development. It turns out that being able to say, okay, I want to test this patch or even this branch, and just do a git merge into the same tree, and then run the same commands, is so much easier for people than trying to explain how to get things into dist-git, where you end up having to do more steps, copy things into dist-git, and bring things back out. It's really a big pain, so this just makes things so much easier. It really comes down to this being what people want to consume and what we actually want to deliver. It also gets back to the point about going from source git to dist-git: there are more opportunities for automation as well, because the existing build and testing infrastructure can work on the source git tree as opposed to the dist-git tree. So those are a lot of the benefits of one source git versus dist-git. Part of the point, when we talked over the past year about this idea that Fedora and RHEL should be the same, is that it turns out Fedora and RHEL will never actually be identical, for a lot of reasons. A big one is that what makes the Fedora community really great and unique is being able to do things fast and experimentally. Matthew Miller likes to say we are the leading edge, and the idea with being the leading edge is that there's a fairly low barrier to entry for trying new things.
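As a self-contained sketch of that point (the repository, file, and branch names here are all made up for illustration), testing a patch in a source git world is just an ordinary merge:

```shell
# Illustrative only: a tiny stand-in for a kernel-style git tree, plus a
# topic branch carrying a fix, merged with a plain `git merge`.
rm -rf demo-tree
git init -q demo-tree
git -C demo-tree config user.email you@example.com
git -C demo-tree config user.name "Demo User"
echo "base driver" > demo-tree/driver.c
git -C demo-tree add driver.c
git -C demo-tree commit -qm "import base tree"
# Put the patch under test on a topic branch...
git -C demo-tree checkout -qb my-fix
echo "the fix" >> demo-tree/driver.c
git -C demo-tree commit -qam "driver: fix the thing"
# ...then bringing it into the tree you build from is one command:
git -C demo-tree checkout -q -
git -C demo-tree merge -q my-fix
cat demo-tree/driver.c
```

After the merge you rebuild with the same commands as before; there is no copying of patch files into dist-git and back out.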
For the kernel, that means if people come in and say, hey, this module is off, can we turn it on? The kernel maintainers review it, and if it looks like it's not going to break anything, we turn it on. That can happen with a turnaround of less than a day. On the RHEL side of things, there's a lot more process involved, simply because they have different requirements. This all comes back to: Fedora and RHEL are never going to be exactly the same. But there are a lot of things it turns out they can actually share. They should be able to share the same spec file; there shouldn't be too much different between them, and there's no reason to have different build processes. The same packaging scripts, in terms of going from source git to dist-git. If we think about what we're actually shipping, maybe RHEL ships a few more files in there, but ultimately it's still the same thing between Fedora and RHEL: shipping a kernel, some modules, and maybe a few other things, hand-waving a lot there. And I mentioned the configuration files; this is one thing I've spent a lot of time trying to figure out how to manage. Ideally, what we eventually want is to have both the Fedora and the RHEL configuration files in the same source git tree and make it very easy for people to switch between them and test them. We're looking at different ideas for how to do this, but one idea is to have the RHEL set, as opposed to the Fedora set, just be another kernel variant. How many of you have installed kernel-debug before?
Yeah, so you've installed that for building, and the point is that if we can have kernel-debug, it's pretty easy to expand that concept and say: if we have kernel-debug, why not have kernel-rhel, or whatever name people want to come up with? It's another set of configs that's available; automation can test it, people can test it, to see what's available. And the point of having different variants is that we do also carry different sets of patches between RHEL and Fedora. Some of these are different for good reasons; some are different because people haven't put in the time to bring them in. But theoretically there's no reason why the two couldn't carry the same sets of patches, at least for building, and then we'd have a single kernel tree again for testing, which makes things easier. None of this is saying it'll be perfect or easy, but the point is that if we can get something approximately right for the amount of testing we need, it makes it easier. Okay, so here are some gotchas with the existing model, the tree we push publicly. In the tree that's currently public, I did some work on top of it to be able to use RHEL files versus Fedora files based on the dist tag. The idea is that it's hard-coded: a RHEL tag versus a Fedora tag selects the correct set of files. That actually makes it a little bit tricky to have one tree that can work for both; this is certainly something we're still trying to figure out. Another thing that is potentially interesting is how Rawhide does snapshots: the existing RHEL code didn't have support for the daily Rawhide snapshots that Fedora relies on. I had to bring some of that in, and we should potentially look at how that's generated; maybe it's something for Fedora to look at as well, to make sure that stays synced up.
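A minimal sketch of what that hard-coding means, with hypothetical paths and logic (the real tree's scripts differ; this only shows the idea of picking a file set based on the dist tag):

```shell
# Hypothetical illustration of dist-tag-based file selection. The directory
# names are made up; the point is the tag pattern decides which set is used.
select_config_dir() {
    case "$1" in
        *.el*) echo "redhat/configs/rhel"   ;;  # e.g. ".el8" selects RHEL files
        *.fc*) echo "redhat/configs/fedora" ;;  # e.g. ".fc31" selects Fedora files
        *)     echo "redhat/configs/common" ;;  # fallback set
    esac
}
select_config_dir ".el8"    # prints redhat/configs/rhel
select_config_dir ".fc31"   # prints redhat/configs/fedora
```

The trickiness described in the talk follows directly: once the selection keys off the dist tag, a single tree serving both distributions needs the tag, or something else, to drive it.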
And then there's the sticking point of the changelog. Every RPM has a changelog, which is supposed to be updated with a description, and sometimes people ask: well, if we have everything in git these days, why do we have the RPM changelog? The answer is that if we think about who the target of the RPM changelog is, it's not the developer; it's the sysadmin or someone else who is installing the package, wants to know exactly what's changed, and is used to querying the RPM changelog to see what's in there. So for now, at least, we still need to keep the RPM changelog updated. But it also turns out that generating the RPM changelog from the existing scripts doesn't quite work when rebasing to every kernel version, because of the way things are structured. This should just be a to-do to clean things up, but it's one more thing we need to work through. More things that have drifted between Fedora and RHEL are choices about how exactly we split things into different repositories. Who has installed the perf tool before? Okay.
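For reference, the query a sysadmin typically runs looks like this (it needs an RPM-based system with the kernel package installed):

```shell
# Show the most recent RPM changelog entries for the installed kernel package.
rpm -q --changelog kernel | head -n 15
```

That output, rather than the git history, is what the changelog-generation scripts have to keep feeding.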
That comes from a package called kernel-tools. kernel-tools is a bunch of user space tools that get built as part of the kernel tree but are not actually part of the kernel itself. Because we're building a set of user space tools, we run into user space dependency problems, and trying to build all of that in the same kernel spec file as a separate package was kind of difficult at times. So I split that off in Fedora to be built separately, and then Justin did some other work to split out kernel-headers as well, so we're now building those in separate repositories. There are sometimes hiccups with Fedora, but overall I think it's worked out fairly well to have just the core kernel built in the kernel package, with the user space tools and user space headers in separate repositories. Trying to bring this back together for RHEL is a little bit trickier, so we need to figure out how exactly to do that. Another sticking point is that sometimes we may need to go back from dist-git to source git. A good example is system-wide packaging changes that clean up various parts of the spec file. These happen because there are very dedicated people who take the time to garden all the packages and clean up cruft we don't need. If those changes just go into dist-git, we need to make sure we have a way to get them back, so we're not overwriting everything. There are also some details around Secure Boot. So this is a long way of saying that we have a proof of concept out there, but it's not quite ready. Now I'm going to give a short demo of a kernel tree. Okay, this is the working tree where I do a lot of my day-to-day work, so I'll start with a make rh-clean. These are a set of custom make targets that have been added by Red Hat kernel engineers to do all the Red Hat-specific work; everything lives inside the redhat directory. Let's say I make a change to a kernel configuration option and I want to make sure the configs work: I'll do make rh-configs, let me spell it correctly, and it's going to go through, build, and run a bunch of checks to make sure all the configs are actually okay. If I want to do a bit more checking, say I made some changes to how things go into the RPM spec file, I might do make rh-prep; it runs some commands and does the prep stage of the RPM build. This is pretty boring; you can see exactly how I spend large portions of my day, so if you ever wonder why kernel developers seem so bored, this is why. We can watch this run for a little bit, but I'll also use it as an opportunity to take some questions; we're about halfway through, and I'll talk a little more after, but if anyone has questions right now, I'm happy to take them. Yes? Yeah, the question was about whether there has been any talk upstream about splitting the tools, perf, the BPF tools, and so on, into a separate project. The answer is yes, this has been discussed off and on; a lot of it happened well before my time in kernel development, but it sounded like it was kind of controversial where they ended up. For now, I think the plan is for everything to live in the single kernel repository, but I've also heard some people, say on the tracing side of things, who are interested in maybe splitting things out. We'll have to wait and see what happens; there is some interest, but I don't see it happening anytime soon. Yeah, you've just correctly described the kernel process: bringing things up every few years, and then sometimes things change. Kevin: is there any thought to maybe building the tools as a regular sub-package in the build system?
The question is about building kernel-tools as a sub-package. I'm not quite sure if you're using the same terminology I was: when you say sub-package, are you thinking of having it in a separate repository, with something like a chain build, or just another package that gets generated? Another package that would be available if it were used? Yeah, that's exactly what I was looking at and could suggest, and that's one of the ideas we've been kicking around; we don't have anything concrete right now. You have exactly hit on one of the key issues in trying to figure out what to do: the kernel build process already takes a very, very long time, and building yet another variant adds more time to the build for things people may not actually want. These are trade-offs we're going to have to figure out. Okay, this is actually pretty boring, so I'm just going to get back to the stuff people actually want to see. One of the things that sometimes gets brought up when we talk about making changes like this is: okay, if the kernel team is potentially looking at moving to a source git tree model, what is there left for the Fedora community to do? Will the Fedora community not be allowed to make any changes because it's all going to be restricted? I just want to emphasize that what the Fedora community does is really helpful. A lot of what we push publicly is not what I'd consider core kernel development; it's a lot of scripting and packaging and other work that isn't always the expertise of the kernel developers, and this is one area where I think the entire Fedora community really shines in terms of helping packages get up to scale. There was a talk I didn't get a chance to see about packaging mistakes, and we were joking that the kernel spec file was probably making a lot of them, just because we end up doing workarounds for various things. So this is an example of something where I think the Fedora community can really help make the kernel package better, in terms of figuring out better ways to do things. We certainly have our own scheme for generating kernel configuration options, but I'm certain there are better ideas for how that could go as well, and really we just need help maintaining things. So, I want to talk about the kernel configuration scheme we've been looking at. How many people have ever looked at the kernel configuration scheme? Okay, yeah, a pretty small number. I asked earlier how many people had built the kernel, and one of the steps people tend to do when they build the kernel is make menuconfig, to get a nice menu of options to scroll through and find what they want to turn off and on. Due to the way we produce kernels currently, that doesn't quite work; instead we have a series of directories that describe which configuration options we turn off and on for various architectures. It's kind of out of scope for this talk, but I'm happy to describe it. One of the things we're looking to do is to actually have Fedora and RHEL share configuration options in some way. The point is that some number of options are going to be common core features: think of your file systems, your core IPC functionality, your core drivers. Those are the same between RHEL and Fedora, so let's keep them the same. There are always going to be some Fedora-specific things and some RHEL-specific things. One thing we're potentially looking at doing is having the Fedora and common sets be open for updates, and then just asking people: please don't touch the RHEL configurations. Part of this is not because we don't trust the Fedora
community, but because the RHEL kernel developers like to make sure they've reviewed all the options very closely, because some of them can have unintended side effects. Even with modules I thought would be safe to turn on, I've been informed: no, no, don't turn this on, you're going to be doing something interesting. So it just makes things a little bit easier. Getting into a little bit of the nitty-gritty details: you may have heard Brendan and other people talk about the idea that, by the time RHEL 9 rolls around, we want everything moving faster and more in the open. Part of the goal is hopefully to make Fedora Rawhide more stable and closer to RHEL, so everything can be used for all the testing. This gets back to a point: sure, we can certainly have people put things up there, but how exactly are people doing code reviews? If you've ever observed anything about the kernel before, it still has a heavily email-based workflow, which some people justify as being around for good reasons; I don't think the reasons are as good, but that's best discussed over beverages. The point is that if we're moving to a workflow where, say, the tree is more public, we need to figure out exactly how people can do reviews, and do those reviews publicly. This is something I've spent a lot of time discussing with people internally: everyone is used to doing all these reviews internally, so how exactly are we going to do this publicly? Are we going to do everything on a mailing list, or is there something else to try? We do eventually want to have everything out there publicly. My dream of sorts is that there is automation out there such that when, say, a new kernel RC comes out, it does everything automatically: generates the commit, generates a branch, and a bot adds in all the appropriate kernel developers, saying, hey, this change looks like it brought in some changes, maybe in your area, or added some new configuration options. And this is happening somewhere publicly; those kernel developers get added, they can review it and take care of everything. A pipe dream, maybe, right now, but it's something we still want to work through, because the point is that if we're doing everything out there in public, you can see exactly what's going on. This is about where I'm wrapping up. I know I gave people a big dump of information and tried to show exactly what's out there. To be clear, I'm trying to make it sound like this is what we want to do, but we're not actually committed to using this at the moment; it's still certainly a work in progress and there are things we need to figure out. I'd like to use the rest of the time to see if anyone has any objections or thoughts about what else we would need to make this work, and what would make the Fedora community confident about switching to this model for developers to use. Obviously we don't want to risk regressing anything in Fedora. Yes, Peter? Yeah, and I think Peter's point, that GRUB has its own set of scripts to do this, is actually a very good one. Part of the reason we want to do this publicly is that we're certain the kernel is not the only package doing this, or wanting to do this, and we'd hopefully like to be able to use common scripting. What we have is a whole bunch of shell scripts, with some awk scripts thrown in, tied together, and as someone repeatedly says, I can't believe you're still using the awk script I wrote in 2012. Which goes to a bigger point: is there something else out there, maybe a better-maintained package or anything else? If you've seen the talk about Packit, or other work to do some of the dist-git packaging automatically,
maybe that's an option. I think Packit is still focused on other things right now, but the point is that may be an option for doing things. Okay, that's a good question. The question was about whether there are parts of the RHEL tree that cannot be public, and the answer is that what we said was true in the past, but we actually went back for approval, and we now have general approval to do most things publicly. So the point is that the configs can be public, the packaging redhat directory can be public, and the general packaging can be public as well. What we're proposing covers the things that are getting ready for the next release of Fedora Rawhide. You're right that eventually, at some point, we will have to branch off for RHEL, but until you branch off, you can do all this work out there. Yeah, that is a good point about figuring out how to handle embargoes and potentially pre-release hardware; this was brought up at meetings I went to a few weeks ago. For embargoes, it turns out that Fedora will probably just continue with its own process, because when I talk about trying to bring RHEL and Fedora closer together, it's essentially saying that what we're calling the future RHEL is essentially just Fedora, with maybe a different dist tag or other magic behind the scenes. So the point is that anything that would have been embargoed for Fedora before would still be embargoed and released as before. As far as how pre-release hardware works, that is an open question we would need to figure out. Right, and I'd say that's because the RHEL base we're looking at is much older, and there's a reason for that. So, Jim: within the CentOS community right now, they tend to either go toward ELRepo or something else where they can grab something like this; would it be possible to build this newer kernel for CentOS? Yes. What I would love to see is that we provide just the
source git tree, and then the source git tree can potentially be built in any number of build roots out there. The point is that it covers, say, the Fedora case, potentially a CentOS case, potentially a RHEL case. That's not an offer to, you know, do any of that, but the point is that it's out there. I sometimes undersell the point of the build root, but I'd argue that once you have the source git tree, there's no reason why, with a few tweaks, it couldn't build in other build roots. That isn't to say we don't have to think about what tools we're using to do that, but I think it should be a goal that this builds in as many forward-looking build roots as possible. Yes? Yeah, I mean, it ultimately comes down to this: I think we're trying to look at a core set of tests for things we can actually test. I'd say we're working on making sure things like containers don't regress, networking doesn't regress, tests like that. For the examples you're giving, things like, say, VirtualBox: if you can come up with a good test case that we can plug into either CKI or the kernel testing harness, we will certainly take that, to make things more stable. And then there's also the fact that, if you've ever heard Steph Walter talk about this, his dream is actually to continue to push this further upstream, to the point where upstream is running CI on these things, so that even though upstream doesn't actually merge any of these things, upstream is running the VirtualBox tests, things like that. There's certainly a lot of work to be done there, but it's something to think about in terms of what should actually be considered gating or not. So yeah, I've certainly seen openQA work before; that's kind of a separate question, and if there's something we need to do to keep supporting openQA, we will certainly work to do that as
well. Also, for the curious, this is what the output looks like when you finish running make rh-prep. This means the kernel tree has passed the prep step, and I could sit here and do make rpm, but given the Wi-Fi network I don't want to sit here and potentially upload things. But yes, the answer to both of those is yes: we have thought about using pull requests, and it is very contentious. There are a lot of reasons why we do want to move to pull requests, mostly because one of the things people are looking at internally is how to make the kernel maintainers' job much easier. There is a series of maintainers who do a fantastic job making sure everything gets in: they read a lot of email, they bring in patches, they do builds, they do so much valuable work to make sure the RHEL kernel actually gets out. But it's also a fairly tedious job, and one of the things we think pull requests could help with is making it so that things get merged, and possibly tested, automatically. So yes, we are looking at doing that, but like any workflow change, there are some people who want to make sure it doesn't break things too much. If you have suggestions about helping move people who aren't familiar with pull request workflows over to them, I'm certainly interested. Yes? So internally we use a tool called Patchwork to help track the patches and see exactly their status. Patchwork is an upstream project that we've provided some enhancements to internally for our needs, but it gives you the status of things. Part of it is the fact that the maintainers are required to read every mail that comes in; a lot of their job is reviewing all the patches to make sure things come in. One of the other things we're currently trying to figure out for maintainers is who's actually responsible for making sure things don't get lost; sometimes it comes down to the developer needing to make sure that the
onus is on them to make sure things actually get in, and again, this is something where we're still trying to figure out how things work. So, anything else? Well, thank you very much for listening to me talk about kernels for a while. I'm happy to take more questions offline. Thank you.