So, welcome everyone, to Monsters, Ghosts, and Bugs. I'd like to introduce myself. My name is Laura Abbott. My official job title is Fedora Kernel Maintainer. I'm one of three people who maintain the Fedora kernel. This generally involves taking updates from the upstream kernel, fixing Fedora bugs, and generally keeping the Fedora kernel going. Apart from that, I also work on some more internal-facing, forward-looking kernel work, which aligns well with my Fedora work. Being a kernel maintainer means I'm a generalist, and I have a background in a wide variety of areas. I used to work on Android devices, and I still work on some things related to Android. I've done memory management. I've done some ARM architecture work. This sometimes confuses people, and they're left asking me, so what exactly do you do? And the answer is I do it all. You really do end up needing to do all of it in order to maintain a kernel.

So I'd like everyone to think for a minute about what kernel is running on your system. This can be the laptop you have out right now, a server you have out there, anything. Now think about that kernel and ask yourself: do you know exactly why you're running that kernel? Can you think of a specific reason why you chose that particular kernel? That's the focus of this talk: to get you thinking about what kernels you're running, how exactly they're maintained, and what impact that may have. I titled this talk Monsters, Ghosts, and Bugs because, I'll admit, it was catchy. But these are examples of the kinds of things you need to think about when selecting a kernel. Bugs, obviously; everyone needs to think about bugs. Then there are the ghosts out there, the more complicated bugs and more complicated features you may need to deal with. And then there are monster kernels out there, and I'm going to talk about why some of those monsters may not actually be as scary as you think they are.

Before I get into all of that, I'm going to talk about how the Linux kernel community releases a kernel. You may have seen this in other talks, but this is my version of it. The usual version from LWN or the Linux Foundation is: wow, there are so many people contributing to the kernel, this was the busiest release yet. And that's very true. There are new people contributing every release, along with the same old people. Despite the fact that there are so many people contributing, the Linux kernel is released on a remarkably regular basis.

This is the visualization I like to use for thinking about the kernel timeline. All the way on the left you have the 4.20 release, which came out towards the end of December. All the way on the right is 5.0. Each block represents a week of time. We are right around 5.0 RC4 right now; later this evening I'm expecting to go back and start working on that release. If all goes according to plan, 5.0 will be released at the end of February. How exactly do we know this is the schedule? It's what Linus has done in the past, and we have no reason to believe it's going to change. Linus Torvalds still sets the release schedule, and this is the schedule he's chosen to stick with. And if anyone's curious, the jump from 4.20 to 5.0 doesn't actually mean anything significant. It simply means Linus decided the numbers were getting too big, something about counting too high on your fingers and toes.
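To make that cadence concrete, here's a rough back-of-the-envelope sketch — purely illustrative arithmetic, not anything the kernel community publishes — that projects a release date by assuming roughly seven weekly release candidates after -rc1:

    from datetime import date, timedelta

    def estimate_release(rc1_date, expected_rcs=7):
        # Rough guess: one release candidate per week after -rc1, with the
        # final release about a week after the last expected -rc. The real
        # schedule is whatever Linus decides; -rc8 or -rc9 weeks do happen.
        return rc1_date + timedelta(weeks=expected_rcs)

    # Hypothetical example: if 5.0-rc1 had come out on January 6th, seven
    # more weekly candidates would put the release in late February.
    print(estimate_release(date(2019, 1, 6)))  # -> 2019-02-24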
This schedule is so predictable that you can probably guess when the next kernel is going to be available. There's a website, phb-crystal-ball.org — the name's a little goofy — but it's a really useful site for showing an estimate of when the next kernel might be released, and it's usually used for planning. I say rough idea because things can change. The diagram I gave previously had only seven release candidates. More and more these days the kernel seems to be getting an eighth release candidate, and almost exactly a year ago, when the Spectre and Meltdown stuff was happening, we ended up with a ninth release candidate. Sometimes things may get delayed by a few days, and the kernel community is not interested in your release schedule; I'll talk more about this later. The releases are going to happen on a remarkably cyclic basis though, so you can use this as a rough idea of when things are going to come out.

Let's talk about some parts of the release cycle. The first part is the merge window. Right after one kernel is released, the merge window for the next kernel opens up. As the name merge window implies, this is when everything gets merged together. This time period lasts about two weeks. This is a very busy time for kernel maintainers. They're doing final preparations to get their trees ready to be pulled by someone, and maybe accepting pull requests themselves. There are a lot of kernel maintainers these days, and many of them don't submit to Linus directly, but all the trees will eventually make it into Linus's master branch. And this is an important point: all kernel releases happen off of Linus's master branch in his git repository. I should say almost always — Linus did take a brief break in the fall — but for pretty much everything you can assume that Linus is the one doing the releases. It's also important to note that a patch is not considered fully merged until it's actually in Linus's tree.

A lot of maintainers will not be reviewing patches during the merge window. Greg KH has an autoresponder email bot that will say, I will not look at this until after the merge window. By the time a feature lands in Linus's tree, it's expected to have had some degree of testing, but once it's actually in Linus's tree, it's going to get tested by a much wider audience. As you might expect, this is going to find bugs. The merge window is where most of the bugs are introduced and found. If you're testing during the merge window and identify a buggy commit, there's a good chance it can get fixed right then and there. This is why it's important to always be testing the kernel, even during the merge window.

Eventually all good things must come to an end, and after about two weeks the merge window closes and Linus releases RC1 off of his master branch. The good news is that at this point things usually start slowing down a lot. No new major features are expected to come in. This is not always the case, though; sometimes new features find their way in, and the larger the feature, the more unhappy Linus is going to be. There's a schedule to the kernel, and the schedule is that new features need to come in during the merge window. I'll explain why this is in a bit. But in general, by this time most of the new features should be in, simply because it's time to continue testing. I mentioned that bugs are often found and fixed during the merge window.
These bugs tend to be low-hanging fruit, obvious things that can be fixed immediately. Trickier bugs may have gone unsolved or just hadn't been noticed yet because the kernel hadn't gotten enough testing time. You'll see a lot of bug fixes for the current kernel sent during this time. This is also the time to start looking towards the next kernel. Maintainers will start to do more patch review post-merge window, but patches sent now, unless they're fixing a new or major regression, will be queued up for the next kernel — 5.1 for the current cycle. If you're sending a patch now, it's going to be queued up for 5.1. Requesting that a maintainer take a patch off-cycle is not going to end well, simply because there's a good chance it could reduce the overall quality of the kernel, and Linus will probably reject the pull request. Greg KH has a talk on YouTube titled "I don't want your code," which is a pretty harsh name for a talk, but in it he gives a good example of a time he accepted a patch off-cycle and how it caused a regression in the kernel. So there's a good reason why the kernel maintainers enforce the rules they do.

Eventually a week passes and another RC happens. Each release candidate looks about the same, but the expectation is that the number of patches going into each RC should be getting smaller and smaller, since only bug fixes are coming in. Sometimes an RC gets slightly bigger, sometimes the shrinking goes faster than others, but time goes on. Eventually, Linus decides that the kernel is stable enough and declares a release. And everyone is super excited: a brand new kernel has come out. This means it's time to update everyone, right? Unfortunately, you may not want to do this. Just because a major kernel version is released doesn't actually mean it's fully bug-free.

After a kernel version is released, it's going to get bug fixes for a period of time, typically until the next kernel version is released. We're currently working on 5.0, which is expected to be released in February. That's going to get stable updates — 5.0.1, 5.0.2 — until the next kernel version, 5.1, is released. I mentioned stable updates are bug fixes. A stable update contains bug fixes that meet a very specific set of requirements. These are usually small, self-contained fixes that are less than 100 lines long. They must fix real issues, not theoretical race conditions, and the fix must already be in Linus's tree. The goal with these requirements is to hopefully minimize regressions and keep the stable trees, overall, stable.

Because not all fixes are appropriate for stable, the way a patch is flagged is that either a maintainer or the developer will mark it with a "Cc: stable" tag in the commit text. If you've ever looked through a kernel log, you may see a lot of Cc: stable tags in the tree. Sometimes the developer will add this; sometimes maintainers prefer to do it themselves. Sometimes patches may not be initially tagged for stable, but they can still be queued up at a later date. Anyone can request that a patch be queued up for stable: all you have to do is email the stable list with the commit hash in Linus's tree and an example of the bug it fixes, and it will eventually get picked up. Because stable is only taking a subset of fixes, by its nature some fixes will not be picked up. There's been an effort underway recently to try and use machine learning to identify more bug fixes for the stable trees.
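To give a concrete sense of what that existing set of stable-tagged fixes looks like: the tag is just a "Cc: stable@vger.kernel.org" line in the commit message, so you can list the tagged commits in any checkout of Linus's tree with git. A rough sketch — the path here is a made-up placeholder:

    import subprocess

    TREE = "/home/me/src/linux"  # hypothetical path to a clone of Linus's tree

    # 'git log --grep' matches against the commit message; '-i' makes it
    # case-insensitive, and 'v4.20..HEAD' limits it to commits since 4.20.
    out = subprocess.run(
        ["git", "-C", TREE, "log", "--oneline", "-i",
         "--grep=cc:.*stable@vger.kernel.org", "v4.20..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    commits = out.splitlines()
    print(len(commits), "commits tagged for stable since v4.20")
    for line in commits[:10]:
        print(line)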
The idea is that you take the existing set of patches that have been queued up for stable, perform machine learning on them, and try to identify other patches that have similar properties and therefore also fix bugs. This hasn't been without controversy, simply because it has managed to pick up patches that maybe shouldn't have been included. I'll talk about this more, but it's also an example of the type of project we're going to see more of in the future for the kernel. If you ever want to see more details about the stable process, there's a document in the kernel tree that will give you more than enough information.

This is a screen cap from kernel.org from when I was working on this talk. You'll notice there are multiple kernel versions up here. You'll see mainline, which represents Linus's tree. You'll see two stables, one 4.19 and one that's end-of-life, marked EOL. Then you'll see a lot of kernels marked long-term. These LTS, or long-term stable, kernels are ones that some kernel developer, typically Greg KH, has decided to maintain for an extended period of time. Greg usually picks one kernel version a year to be the next LTS version. Sometimes he'll announce this version beforehand; sometimes he won't. Sometimes he doesn't want to announce it because he's found that kernel developers really want to get their patches into long-term stable kernels, and therefore they'll try to push code that isn't really ready into the LTS. So sometimes the choice ends up being a surprise for everyone. Usually, without fail, at least once a year you're going to see a new LTS kernel.

This is a screen cap of the description of the current kernel releases from kernel.org. As you can see, Greg KH is listed as the maintainer for most of them. Ben Hutchings is a kernel developer associated with the Debian project who has done some work on that tree there. LTS trees are not all created equal, though; Greg intends for them to be used for different purposes. The strong preference is for anyone consuming the LTS trees to use the most recent LTS tree — so 4.19 — for one year's time, and then upgrade the next year when the next LTS version comes out. The older LTS versions are targeted at hardware enablement and other such use cases where updates may not be possible. Even though these older LTS versions are getting updates, you may find that they don't always get the same testing and the same rate of updates as the newer ones.

Those are most of the kernel trees that upstream produces. Obviously people need to consume these trees in some form, and usually this is in the form of a distribution. When I say distribution, I mean not just traditional desktop distributions like, say, Fedora or Gentoo, but also perhaps more confined distributions like, say, Android. And the kernel that's running in a distribution really depends on how the maintainers want to approach kernels. At a high level this usually ends up being a trade-off of stability versus features. I'm going to put on my Fedora maintainer hat for a little bit and talk about how Fedora does kernels, to give an example of the trade-off between stability and features. The Fedora model is to rebase to new kernels at about the time they're released. So when 5.0 is released in February, the Fedora stable releases will be getting that kernel. This ends up being the easiest option for our workflow and the Fedora release model.
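As a quick aside on that kernel.org listing from a moment ago: you don't have to read it off the web page by eye. kernel.org also publishes the same information as JSON, so a short script can show which of the current releases are long-term. A sketch, assuming the releases.json feed keeps the field names it has today:

    import json
    from urllib.request import urlopen

    # kernel.org publishes its release table as JSON; the 'moniker',
    # 'version', and 'iseol' field names are an assumption based on the
    # feed's current shape.
    with urlopen("https://www.kernel.org/releases.json") as resp:
        data = json.load(resp)

    for rel in data["releases"]:
        if rel["moniker"] == "longterm":
            print(rel["version"], "(EOL)" if rel.get("iseol") else "")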
Rebasing to each new kernel is also well-aligned with what Fedora actually wants to be, which is a leading-edge distribution. With each new kernel version, you get a lot of new features and significant improvements. I'm going to throw out some buzzwords here: containers. Everyone loves containers. With each new kernel update, you get new container features, new things in namespaces. Another buzzword everyone loves is eBPF, which is used for tracing and networking work. We're seeing a lot of new uses for it, and with each new kernel version you get a chance to use them. Hardware enablement is another example of the new features you get. A good example is the AMD GPU drivers: there's been a lot of work on those in recent years, and you get to use the latest drivers with these new kernel updates. I could go on, but you get the idea: there are a lot of new features.

All of these new features come at a cost. Sometimes the new features don't work well, or even worse, they cause regressions, and things that used to work stop working. Real-life example from early December: 4.19 came out, it was pushed out, and people started reporting file system corruption. It took a few weeks to narrow this down, and it turned out there was an issue in the block layer. So 4.19 had a major block layer regression that was causing file system corruption. Most issues you're going to see aren't that dramatic. Most issues tend to be more annoying — something doesn't quite work, or maybe your display doesn't work as well as you expect anymore. And this ends up being a key point for kernels: how much do you actually want to annoy your users? Users of Fedora know that they're going to be getting these new features, but they also expect that they're going to have to do some work on bug fixes, in terms of reporting them to the Fedora maintainers or the upstream communities. Obviously we want to minimize the number of problems that occur, but sometimes you have to expect that bugs are going to happen.

I mentioned that LTS kernels have strict requirements about what they take in, and they're only supposed to be taking bug fixes. This means that if you're using one of the upstream LTS kernels, you're not going to see new AMD graphics drivers or new eBPF functionality, but chances are you're not going to see ext4 corruption issues either. So, for example, if your goal is hardware enablement on a known set of hardware, using the LTS kernels is probably what you want, and you can perhaps use one of those kernels to meet your needs.

The LTS kernels are great kernels provided by upstream, but there are a number of distributions out there that don't actually use the upstream LTS kernels and instead deliver something that kind of looks like an LTS kernel but follows an independent schedule. Upstream is not really a fan of these kernels. They've been nicknamed, rather harshly, Frankenkernels. This is because, from the community's perspective, they don't know what's in them, and to them it looks like you're taking this piece and that piece and smashing them all together and getting a huge, scary monster of a kernel. If you look at one of these kernels whose version says 4.4, you as an external community member don't actually know what's in that 4.4 kernel. It perhaps has some bug fixes but not others, some features but not others.
There were a lot of raised eyebrows and some joking on Twitter when Red Hat announced that eBPF was going into its kernel, just because eBPF is a new feature and Red Hat is known for using this delivery model. Upstream also likes to claim that this model means these kernels are going to be insecure. The argument usually made is this: upstream is supposed to be the source of truth, upstream has identified a known set of bug fixes, and all bug fixes can potentially be security issues, so therefore by not taking all identified bug fixes you potentially have a security issue. In some respects I can't really dispute that if you don't have a particular fix in your kernel then you do have an issue, but part of the point of some of these large kernels is that there's a company with large backing behind them to actually deliver those updates when they're actually needed. Part of the philosophy here is that it is better to select only the patches that the people maintaining these kernels have chosen, rather than take everything out there and perhaps have to do more validation on other, somewhat random patches that upstream has identified, and bring them into your large code base. Some of these large kernels may have other guarantees for things like binary compatibility, so bringing in more fixes than necessary may make things harder. And if what I've described for one of these large kernels sounds ridiculous and you're saying, why would I ever run that, then chances are you're not the target audience. The reason all these kernels are maintained differently is so that you can select one that meets your needs.

I mentioned that part of the idea of only taking certain fixes is that you want to make sure you know exactly what you're getting. And despite the best efforts of the stable trees, they still sometimes see regressions. There's a famous quote out there, roughly, that given enough eyes, all bugs are shallow. And this is what the kernel community has worked on for years and years and years: the focus has always been on code review to try and find issues before they get committed. This has worked successfully for many years, given where we are today with the Linux kernel. The stable kernels are a good example of the limits of this process. I mentioned that patches for the stable trees are posted to the mailing list; these patches are posted with the maintainers and committers CC'd, and they are presumed to be accepted unless someone actually objects, just to keep things moving. One of the issues we've seen is that people either don't object, or they miss that they should object, because the patch is actually incorrect and shouldn't be applied, or there's a missing dependency. This has led to regressions in the stable trees, to some people not wanting to use them, and even to some developers asking that their patches not be picked up automatically for the stable trees unless they have specifically approved them. Obviously nobody wants regressions, and I also don't want to make it seem like the stable trees are horrible and unusable. There's a lot of work going into them, and things have improved dramatically. One of the areas in which people are trying to improve the stable trees is using things like continuous integration and automated testing; there are a lot of organizations doing that now.
But going back to, say, the big scary monster kernels over there: the philosophy is still that trying to track one of these stable trees may not provide what the maintainers of those kernels want, so it's better to stick with a model of picking and choosing individual patches.

I want to talk about embedded kernels and Android as a lesson learned for dealing with new features and shipping a kernel. When I say embedded, I typically mean Linux devices that are designed to run for a small, specific purpose — think something like a Raspberry Pi that may be used in an industrial setting you've never heard of. Embedded Linux long had a reputation for shipping very out-of-date kernels with patches that had never been reviewed by the kernel community. This was not necessarily because they thought it was the best way to do things, like the large enterprise kernels do, but because they perhaps didn't know any better. This is why I spent some time explaining how the kernel community releases a kernel. If you think about how this works, when you choose a date to ship a product, you need to know what kernel you're going to be shipping around then — let's say you pick a kernel version. If you're going to be shipping with kernel version Y, you need to be working to get your patches into kernel version Y minus one, or even Y minus two, because kernel development and open source all take time. And like I said, if you try to get kernel maintainers to take these patches off-cycle, there's a chance the entire kernel is going to end up poorer for it. But if you're a company who needs to release a product, you ultimately have to make sure you're delivering something, even if the kernel community hasn't taken your patches. You can't really say, sorry customer, you don't get a product this year because upstream didn't like our driver. So you take what patches you do have, ship them on top of what kernel you do have, and put that out there. This was the status quo for many years, and in some respects it was successful, because a lot of these products did actually make money, but it wasn't a lot of fun to work with.

That was what Android did in the early days. The early versions of Android shipped with a lot of features that had never really been reviewed by upstream. Eventually the Android team did decide to post these features upstream, and there was a lot of controversy, many emails sent back and forth — Google the wakelock controversy if you want to read the entire sordid details. Arguably Android's method for delivering a kernel like this was a success, because Android was a successful commercial product, but it also wasn't very sustainable from a kernel maintenance point of view. Today things look much better for Android, even if at a high level Android is still doing the same thing of taking some out-of-tree patches and shipping them with a kernel. The terrible secret of space, though, is that almost all distributions are going to be shipping some out-of-tree patches, simply to do the little tweaks they want. This is not necessarily a bad thing. The key is figuring out what you're going to do with those patches and how you're engaging with the community. What's really changed for Android is that they're now much more active in the community and are presenting their work. I mentioned the wakelock controversy.
A lot of the controversy around that was that the upstream kernel community didn't understand the problem the Android developers were trying to solve for their commercial product, and it took a lot of back and forth to discover the actual problems that needed to be solved. This is what Android has gotten much better at today: presenting their problems to the upstream community and making sure their needs for a product are communicated. Obviously not every feature they propose is going to be immediately accepted, but it at least gets out there to start the dialogue. Android is also a heavy consumer of the LTS trees, testing every LTS release. This means not only that the Android project is better supported, but that the upstream community benefits, because they get the bug reports from whatever that testing finds.

I mentioned that Android was a success, but a lot of the embedded distributions have also gotten much better these days. Many of the board support packages out there now simply grab the mainline Linux kernel; you can run many embedded boards with the mainline kernel. This has been thanks to a lot of hard work by kernel developers, not only to do the upstreaming but also to educate companies about how exactly to engage and figure out how to deliver something. And this is the key point of a kernel strategy: how exactly are you engaging with the community to solve your problems? Maybe the community doesn't actually want your patch, but then you need to figure out what you're going to do about that.

This is a lead-in to my next topic. People often just want to run their own kernels off of kernel.org instead of running a distribution kernel. I always start answering this by saying it's absolutely something you can do, and I never want to discourage anyone from doing this for their own personal purposes, in terms of learning or getting excited about something. One of the best ways to learn is by breaking something and then fixing it. Compiling your own kernels to apply different patches is a great way to perhaps break something, but then also get a chance to fix it. Try turning off different config options and then figure out exactly what's available on your hardware. You can learn a lot by taking a patch that you want to maintain, doing the backports, and trying to make it work with new kernel versions. If this is for your own personal use, that's fine. But if we're thinking about kernels that are actually going to be used on, say, a web server that has services running on it, you really don't want to be doing this. I'm really serious here. The kernel that gets shipped with your distribution is usually well planned out to be aligned with what exactly the distribution wants to support. A distribution has usually made some deliberate choices, saying we want to support X but not Y. And so if you're running your own kernel, you're probably trying to do something that the distribution didn't want to do. Maybe what you want to do is perfectly reasonable, like picking up a fix or a feature that's going to be released in an update soon. That's okay. The beauty of open source is that, like I said, you can absolutely run whatever kernel you want and do whatever you want. You really just want to think carefully about why you want to be running your own kernel and the work that's going to be involved. And I don't say this to try and keep myself in a job as a kernel maintainer.
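One small habit that helps if you do end up carrying your own kernel: before backporting a fix by hand, check whether it's already reachable from the branch you're shipping. Here's a rough sketch using git; the commit hash and branch name are placeholders, not a real example:

    import subprocess

    TREE = "/home/me/src/linux"                       # hypothetical checkout
    FIX = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit id
    BRANCH = "my-product-4.19.y"                      # hypothetical product branch

    # 'git merge-base --is-ancestor A B' exits 0 when commit A is already
    # reachable from B, i.e. the fix is already contained in that branch.
    result = subprocess.run(
        ["git", "-C", TREE, "merge-base", "--is-ancestor", FIX, BRANCH])

    if result.returncode == 0:
        print("Fix is already in the branch; nothing to backport.")
    else:
        print("Fix is missing; plan the backport, and get it reviewed.")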
Being a kernel maintainer can often be a pretty tedious job, and one of my goals is to see if I can automate myself out of a job more. I don't think this is completely possible, though. A big part of what kernel maintainers are doing is thinking not just about today's kernel but about how they're going to deal with tomorrow's kernel as well. Let's say you found this amazing patch set upstream that improves your workload by 1000%, but it's not going to be accepted upstream any time soon, or maybe ever. So you decide to start building and deploying your own kernels. You build it, you test it, you deploy it, you're done, right? This is your periodic reminder to please make sure you update your kernels. You cannot just put one kernel out there and deploy it forever. You need to make sure you're getting not just security fixes but all fixes; everything will be better. And the advice I would give to anyone out there, which is endorsed by upstream, is that if you want to run your own kernel, make sure you're tracking one of the LTS branches from kernel.org and taking their updates. Otherwise you're going to be responsible for delivering your own updates.

I spent a long time talking about how these larger enterprise kernels pick and choose patches individually instead of taking the stable updates. Part of the reason that ends up working is that there are enough kernel engineers looking at those kernels to know exactly which patches they can pick up, which patches are okay to skip, and what's already in the kernel. This is a really hard thing to do as an individual or even a small team. Android is an interesting study in this. I used to work on Android phones, and at the time we didn't really track stable updates as closely as we should have. This meant we were regularly hitting bugs in our systems that had been fixed months ago. If you were around earlier for Larry Woodman's presentation on memory performance: one of the bugs I remember debugging was that some page counts were underflowing, simply because of the configuration we were using. It had already been fixed in stable but just wasn't in our tree. Not a lot of fun for anyone involved. If you're not running an LTS branch and regularly picking up those updates, you're forever going to be playing catch-up, trying to figure out which of those stable updates apply to your off-cycle tree. Upstream is already doing a lot of work to identify bug fixes and package them all up; you really want to be taking advantage of that.

Even if you use an LTS kernel like everyone recommends, there's a good chance that you're eventually going to have to do some work to keep your arbitrary patch up to date. Eventually you're going to run into merge conflicts, and I'd say that merge conflicts are why I will never be able to fully automate myself out of a job. There are tools out there to make dealing with backports and merges easier, but ultimately it's going to have to be a human who spends the time to think about and understand the code. A new structure gets added, the semantics of an API change — you have to figure out, as a kernel maintainer, exactly how you're going to fix that up to make it work. Even more fun is when a patch does apply but still doesn't work, because the semantics have changed. Another story from my Android days: I once helped with a kernel update and did a backport and merge incorrectly, which resulted in a memory accounting bug.
This went on for months, with some weird behavior, and it wasn't until I actually got enough reports of negative memory page counts that I could finally figure out exactly what was wrong. Again, not a lot of fun for anyone involved. If I could go back and give myself some advice, I would probably have tried to get more people to actually review what I was doing and ask, does this look right? Because I remember I wasn't actually sure I had done the work right. Lesson learned. The reason these large enterprise kernels actually work is that there are enough eyes and people reviewing the patches to make sure things are done correctly. Like I said, I never want to discourage anyone from doing this for learning, because if you're doing this work on your own it's a great way to learn; I learned a lot about the memory management system by breaking it and then fixing it. But if you're thinking about how to roll out kernels for, say, a service or a product, you really don't want to be doing this, and you want to make sure you're setting yourself up for success.

Then we have our favorite bugs disclosed last year, Spectre and Meltdown, the ghosts of this talk. There's been a lot written out there about how that disclosure was handled absolutely terribly. People have given talks about it, and I would absolutely agree that the process was not great. We certainly learned a lot, and later variants were handled much better. But we're never actually going to be done with security. I'd like to believe that everything is going to be handled perfectly the next time we have a security issue, whether it's in software or hardware, but I'm also a pessimist and believe that there's inevitably going to be some sort of problem. If you're maintaining a kernel that's not tracking one of the LTS kernels — or even if it is tracking one of the LTS kernels — what's your plan going to be when you wake up, look at the news, and see, huh, there's a fun new zero-day in the kernel? You are ultimately going to be the one responsible for applying that update to your tree and getting it out. I've spent some time talking about backports and how you can do them incorrectly. If you're the one applying that update to your tree, how confident are you that you've applied the security fix correctly and the issue is actually fixed? There have been instances where distributions have not applied these correctly and have had to go back and fix it — kind of embarrassing. This also leads into my last point: I've made it seem like distributions always get this right, in terms of everything I've spelled out, but that isn't true. Distributions can make these same types of mistakes in dealing with backports and out-of-tree patches, but part of the advantage of going with a distribution instead of your own kernel is that there are already people out there looking at and working on that kernel. So my advice is to take advantage of someone else's mistakes before making your own.

That's about all I have to say, so I want to wrap things up. The question of which kernel you pick ends up coming down to what your focus is. If you don't care about the newest features, it might make sense to run an LTS kernel that just gets bug fixes. If you have stricter stability requirements or are particularly picky, maybe one of these large, scary kernels is actually right for you, because it's not that scary.
If you actually want the newest features, maybe you can just get away with one of the latest stable kernels. And if you want to run your own kernel, you certainly can — open source lets you do that — just think twice about why you're doing so. More than anything, when choosing your kernel, you really want to pick something that's going to make your life easier for the things you care about. Questions?

So the question is: there's an upstream project about using machine learning for stable updates — are there any efforts by Red Hat or Fedora to try and do something like that? I'm going to just speak for Fedora. Fedora is an upstream consumer of the stable trees; we pretty much take them directly, so it doesn't really make sense for Fedora to do anything more than that. But I have at least talked with Sasha Levin, who's done that work, and tried to give him some feedback about things like that. So we are looking at it for things like that. Thank you very much.