I guess I would like to thank you all for coming. My name is John, and I'm here to talk about the kernel. I will start, as I often do, by talking about what the kernel community has done recently, and then get into the more interesting stuff after that.

This is what the release history looks like over the course of the last year. You can see that we have put out six major kernel releases since October of last year. Each one of these is a major release incorporating all kinds of significant new features: an awful lot of changes, something like 13,000 to 15,000 individual commits from a very large number of developers. It's really the same old chart that I put up every time I give this talk; it doesn't change a whole lot. In fact, I could be putting up high-tide times or something like that and be imparting about the same amount of information. But it is worth pointing out that 6.2, released back in February, set a new record for the number of developers participating in a single development cycle: we had almost 2,100 developers adding code to the kernel over the course of about nine or ten weeks. That's a lot of people working with our community. And although the number of developers has subsided a little bit since then (it tends to go up and down), I don't really doubt that we will set another record sometime in the near future.

That last column I've added there is the number of first-time developers participating in each development cycle. We're getting somewhere between 200 and 300 first-time developers in each cycle; that many people are coming into our community and contributing their first patch to the kernel. So we still have a healthy flow of developers coming into our community, which is a good thing. As long as that continues, I believe the kernel community itself will remain reasonably healthy.

One other thing to add to this chart, of course, is the 6.6 kernel, which is currently under development; these are numbers from a few days ago, as of the -rc1 release. This kernel can be expected around the end of October. It has already merged over 12,000 changesets, and we'll probably see that approach the same 15,000 number by the time it's done and all the bug fixes go in. There's a lot of interesting work in this kernel as well, including some major changes to the CPU scheduler, which we haven't seen for quite a while, and an awful lot of other stuff. So there's still a lot going on in the kernel community; really, though, it's business as usual.

These, of course, are our mainline releases. These are the releases that Linus Torvalds puts out before he leaves the building and goes on to bigger and better things. But these are not the kernels that most of us are actually running, right? Some of us run mainline kernels, but most people are smarter than that and run something else, usually something based on the stable kernel updates. So this is the current status of the long-term stable kernel updates. There are six of these kernels under long-term maintenance, going all the way back to 4.14, which was released almost six years ago. You can see that we get a lot of updates, over 300 of them for 4.14, and many, many thousands of fixes going into these kernels after they come out as mainline releases.
The volume of these fixes is an interesting concern in its own right, and I'm going to get to that in just a moment, but I want to use it as the transition to the main part of the talk. This time I am focusing on key questions that the development community has to answer over the course of the next year or so. I think there are a lot of things we need to talk about, a lot of things we need to figure out, and that's what I'm focusing my talk on now.

So, with regard to stable updates and things related to them, the first question I want to put out there is: what should our users actually be running? I've just said that we aren't running the mainline kernels that Linus is putting out there; we're running something that is further developed, at least further refined, after that. One answer, if you're going to tell somebody what to run, is to tell them to run the latest stable kernel update, because that is the most stable, most secure, best kernel that we in the kernel community know how to create at this time. It's the very best we can do, so you should run that. This position is becoming increasingly common within the kernel development community itself. This is a quote from Greg Kroah-Hartman, who manages the stable kernels, saying that you really have to take everything that comes into these updates, all 20,000 or so fixes, because otherwise you will not get all of the known fixes. You will have known bugs; you will have known and unknown security bugs. The only way to avoid that, the only way to get the most secure and most stable kernel, is to run the updates that the community is putting out there for you.

So you can do this, but it raises some interesting questions, because the number of fixes here is not small. These numbers were about two weeks old when I made this slide; 4.14, I think, is probably approaching 28,000 commits at this point. That is two full development cycles' worth of changes added to an ostensibly stable release after it came out. So there is an awful lot of development, in a sense, going on after our stable kernel has already been released. That is an awful lot of changes, and it raises some eyebrows.

I did some looking to see where the bugs being fixed came from. This is a little bit small, I am sorry, because it covers the entire Git history, but way up there at the top is a line saying that, of all those 28,000 fixes that went into 4.14, about 1,400 were fixing bugs that were introduced in the 4.14 development cycle itself. Everything else came from a different development cycle, mostly earlier ones. So you see this long, long tail of releases here. Essentially every kernel release we have ever made during the course of the Git history introduced bugs that were only found and fixed sometime after the 4.14 release. A lot of these bugs stay around for a long time. The bottom line there, the one that says 2.6.12, really covers everything that was introduced into the kernel before the Git history began at the beginning of 2005. So those 200-plus bugs were introduced into the kernel at least 18 years ago and took that long to be fixed. There are an awful lot of bugs that lurk in the kernel for a long time before finally turning up somewhere and needing to be fixed.
All of this stuff gets scooped up and put into the stable releases when those come around. So if you are running a stable release, you are getting fixes for all of these bugs that have been introduced over the course of the kernel's development history. And there is value to this. Android has been pushing very hard toward its generic kernel image, has been basing those images on the stable updates, and has been tracking the stable updates increasingly aggressively over time. The people who are interested in Android security at Google are pretty good at their job; they are serious about it, and they spend a lot of time looking into this. They have looked at the security problems that have been found in the kernel and how they affected the Android kernels. What they found is that the vast majority of security problems disclosed in the kernel had already been fixed in the Android kernels before they were disclosed, because the fixes found their way into the stable updates and were incorporated before anybody knew they were actually security-related bugs.

This is one of the interesting aspects of kernel development: almost any bug can be a security bug, and you don't really know that it is until somebody finds a way to exploit it somehow. So an awful lot of fixes go in that are not marked as security fixes. That is not because the kernel community is trying to hide its security fixes; sometimes there is a little bit of sneakiness that goes on there that I personally don't like, but most of the time it really is just that nobody knows a given bug is a security bug. It is only later that somebody figures that out, and so the only way to protect yourself against these sorts of bugs is to take all of the fixes. That is what is happening here; that is how Android has benefited from it.

But there is another side to this that you really cannot gloss over, although some people perhaps do, and it is shown in the second chart here. This is a chart of the bugs fixed in 4.14 that were introduced in kernels after 4.14. That bottom line there is bugs introduced in 4.15: the 4.14 stable updates fixed over 200 bugs introduced in 4.15. And it goes on all the way up to 6.4. Now, one might wonder how a kernel could be affected by bugs introduced after it was released. The answer, of course, is that these are bugs introduced by the fixes that were applied to later kernels and then backported into the 4.14 stable releases. Another way to look at this is to say that, if you are going to take a kernel and backport 28,000 fixes into it, there is simply no way that some of those fixes will not contain bugs of their own. You are going to introduce bugs; there is no way around it, and then you are going to have to fix those as well. That is just the way of it.

So we see that the older releases have more of these bugs and the newer releases have fewer of them. Now, it is possible that we are doing better at not introducing bugs; our testing story in the kernel has improved considerably, so we are finding more bugs sooner, before release, and so on. But I think anybody would tell you that the real reason these lines are shorter for the newer releases is simply that those bugs haven't been found yet; the other chart tells you it takes a while. So this chart will surely flatten out over time. We are introducing bugs as well as fixes when we put a lot of fixes into our stable kernel updates.
And this really makes some people nervous. There are people who want to deploy a kernel somewhere, see that it works, and be able to apply updates to it without the fear that it is going to break. They fear that quite a bit, and rightly so. This leads to the other approach that you see, which is to take an old kernel, stick with it, and apply only carefully selected updates that you know fix specific bugs you are worried about, ignoring the rest. This is the enterprise kernel model, of course.

This model, too, has some downsides. Here, too, you end up with kernels that have thousands and thousands of patches applied to them, usually including backports of major features as well as fixes, because even though you are running an old kernel, people want it to work on new hardware and things like that. So you get something that is very far removed from any kernel the community has released. You are getting something that is unique to the vendor putting it out, which completely isolates you from any kind of community support; the community has very little visibility into what is in these kernels, and cannot and will not support them. That leaves you dependent on whoever has done all of this work, whoever you are paying for a support contract, to fix problems in it. And that, of course, leads to problems, because people want to run these kernels and not pay for those fixes. We have seen an awful lot of fuss recently about what certain enterprise vendors are doing with regard to access to their code and their distributions, the fuss around Red Hat Enterprise Linux in general. This is what that fuss is about. It applies across the entire distribution, but a huge amount of the work involved is at the kernel level, in what they do to make their special kernels. This is what a lot of that fighting is being driven by. So it is a model that works for people, but it is a model that has some real problems, and it is a model that is creating stress within our community.

Just to conclude this section: I think this disagreement is likely to go on for a while. We have not really adequately answered the question of what is best for our users. What should our users actually be running? In the end, the users themselves have to make that decision. I think we are seeing a bit of a shift toward stable kernels, but it is far from a stampede at this point, so expect to see this go on for a while.

I wanted to make one last note about the stable kernel updates. I put up this list of six kernels that have been maintained going back about six years; maintaining kernels that far back is what the stable kernel team has been doing for a while. But they have come to the conclusion that there is really no point in maintaining them for that long, because people are not using them. So the six-year update policy on these kernels is going away. When 4.14 goes out of support, which is likely to happen sometime early next year, there will not be another six-year kernel named to replace it. These older kernels are going to go away, and we are likely to return to a world where the long-term stable kernels are maintained for about two years; after that, people will simply be expected to update to a newer kernel, which, we really hope, will be something that is safe for them to do.
It will be a jump for some folks, but that is really the way it is going to be, because usage of the older kernels was just not being seen. And as the kernels get older, it gets harder and harder to backport fixes to them in any kind of safe and stable way. So that is the change that is coming, and we will see it start to manifest itself next year.

And now for something completely different: you can't do a kernel talk without talking about BPF these days. BPF, of course, is an in-kernel virtual machine that allows code written for that virtual machine to be loaded into the kernel by user space and run in the kernel context in a safe way; at least, that is the hope. An awful lot of checks are applied to this code to ensure that it cannot harm the kernel, and it is being presented, in a lot of ways, as a safer form of C. There are a lot of interesting mechanisms that have been added to allow, for example, BPF code to acquire a lock, but not to simply sit on it: the code must release that lock at some point, and the verifier will ensure that it does. If the verifier cannot prove to itself that the lock will be released, it will not allow the code to be loaded and run. So BPF applies an awful lot of safety checks that make it a safer programming environment than programming in straight C in the kernel. There are downsides as well: it can be a hard environment to work in, you are limited in what you can do, and so on. But for certain kinds of things, you can do a lot.

So there are a lot of things you can do in BPF now, starting with packet filtering; BPF originally stood for "Berkeley packet filter", after all. You can load programs to select packets or reject them, that sort of thing. You can write TCP congestion-control algorithms and traffic-control algorithms, and do various kinds of advanced routing using the express data path (XDP): a lot of networking stuff, which is where the roots of BPF lie. But beyond that, you can do things like write drivers for infrared controllers in BPF. And once you have one of those, if you get a new controller, you just load a new driver; you don't have to build a new kernel or anything like that, and the controller works. Similarly with input drivers: if you've got a keyboard or a mouse with weird quirks, and evidently the vendors of these things are fond of weird quirks, you can load a little BPF program that makes it work like every other one. There is system-call filtering using the seccomp mechanism, an increasingly widely used sandboxing mechanism where you load BPF programs that decide which system calls a particular process can use and which are denied to it. You can now write entire Linux security modules using the BPF mechanism, which is a flexible way to create security policies within the kernel. And of course the use of BPF for tracing and analysis is quite extensive in the kernel at this point; we have a lot of observability mechanisms, all based on BPF.

So you may be thinking that you don't use BPF yourself. I was sort of thinking so, and then I asked my laptop, right here, whether it was running any BPF programs at the moment. They didn't fit on the slide: there were something over 20 of them loaded, mostly by systemd, which has become a pretty active user of BPF programs. For example, this one that I've highlighted is a security module, written in BPF, that is there to restrict the types of file systems that particular processes can access. There are some file-system types, really old, ancient, unmaintained file systems, that are seen as insecure, and so there is a desire to simply not let people use them, even though support for them is still in the kernel. It's an example of how BPF is being used to change the interface the kernel offers to user space, to make the kernel something a little bit different from what it is now.
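To give a rough sense of what these programs look like, here is a minimal sketch of an XDP packet filter, the oldest sort of BPF use case. This is my own illustration, not code from the kernel or from any of the systems I just mentioned; it simply drops all IPv6 traffic on the interface it is attached to and passes everything else:

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    /* A trivial XDP filter: drop IPv6 packets, pass everything else. */
    SEC("xdp")
    int drop_ipv6(struct xdp_md *ctx)
    {
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        /* The verifier insists on this bounds check; without it,
           the program will be rejected at load time. */
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto == bpf_htons(ETH_P_IPV6))
            return XDP_DROP;
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";

Build it with clang's BPF target and attach it with something like "ip link set dev eth0 xdp obj filter.o sec xdp"; the kernel will then run it on every incoming packet. And if you are curious what your own machine is running, "bpftool prog list", run as root, will show you.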
So this is all stuff that is being done now; this is in production kernels. But there is a whole lot of stuff being done out there that could be in production kernels. The extensible scheduler class is a framework that allows the creation of complete CPU schedulers in BPF. You can write the entire algorithm for deciding which processes should run, how long they should run, when one process should preempt another, and which CPU they should be running on; you can do the whole thing in BPF. It is an extensive and capable mechanism, done by people at Meta and Google, and it is quite complete. There are a lot of reasons why you might want to do this. It allows for easy experimentation with scheduler algorithms: if you are trying to write a scheduler now, or tweak the existing scheduler, you have to write your C code, build a new kernel, boot it, watch the thing crash, and then start all over again. That takes a while. Here, you simply load a BPF scheduler, see what it does, take it out, load another one, that sort of thing. So it speeds scheduler development quite a bit. It also allows for the creation of special-purpose schedulers: if you have a weird workload that needs scheduling policies different from what is normally available, you can write a scheduler to do it.

Beyond that, the kernel over the last year or so adopted a thing called the multi-generational LRU. It is part of the memory-management subsystem, used for page aging: for deciding when certain pages, certain parts of a process's memory, are not being used and can be taken away and used for other purposes. It sorts pages into generations, where the newest generation is the most actively used memory and the oldest generation is the least used. When memory has to be reclaimed from a process, it is taken from that oldest generation. Needless to say, with an algorithm like this, the aging of the pages is key, because that is how you make your decisions about which pages you take out and which ones you don't; if you make that decision wrong, you hurt the performance of the system as a whole. So there is talk of allowing BPF programs to do this page aging: to look at what a process is doing and decide where its pages should be placed in these generations. This, again, is the insertion of BPF into a deep, core subsystem of the kernel, the sort of thing that you normally could not affect from user space. But if this work goes in, you will be able to.

io_uring is a subsystem for asynchronous I/O at its core, but it is rapidly becoming an alternative programming interface for the kernel, where you can load a whole sequence of operations into the kernel, have them run asynchronously, and just hear back from the kernel when they are all done. You can load a sequence of operations that says, for example: open this file, read that file's contents, write them out to that network socket over there, close the file, and just tell me when it's all done. The kernel will handle it all asynchronously.
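To make that concrete, here is a minimal sketch of two linked operations using the liburing library. The file name and buffer size are mine, purely for illustration; the point is the IOSQE_IO_LINK flag, which tells the kernel not to start the write until the read has completed:

    #include <liburing.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        char buf[4096];
        int fd = open("input.txt", O_RDONLY);

        if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        /* First operation: read from the file. */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        sqe->flags |= IOSQE_IO_LINK;    /* link to the next operation */

        /* Second operation: write the buffer to standard output;
           it will only run once the read has finished. */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_write(sqe, STDOUT_FILENO, buf, sizeof(buf), 0);

        io_uring_submit(&ring);

        /* Reap both completions. */
        for (int i = 0; i < 2; i++) {
            if (io_uring_wait_cqe(&ring, &cqe) < 0)
                return 1;
            io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }

Notice one limitation of this scheme: the linked write has no way of knowing how many bytes the read actually produced, so it just writes the whole buffer.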
You can get some pretty amazing I/O rates out of this; it is a way to speed up I/O-intensive programs. But the sequencing you have in io_uring now is pretty simplistic. If you could put BPF programs in there as well, you would have a whole new level of control. So there have been patches out there to add BPF to io_uring. They haven't been merged, and I haven't seen a lot of pressure to merge them recently, but I doubt they are going to go away. Once again, though, this is pretty deep stuff in the kernel.

In this context, it is worth pointing out that the extensible scheduler class was recently rejected by the scheduler maintainer, who said: no, we cannot merge this; we are not going to put this into the kernel, sorry. You might ask why, since this looks like a pretty useful capability. A whole lot of reasons were cited, some of which make real sense. It starts simply with the added maintenance burden on the scheduler maintainers themselves, who didn't want to deal with yet another scheduling class and all of the stuff that goes with it. There was a whole lot of fear that vendors would use this for benchmark gaming: they would load a scheduler that makes their particular benchmark, on their specific hardware, go a little bit faster. Seeing what vendors have done over many, many years, that seems like a legitimate fear to me; of course they are going to do that if they can. Along with that, there was fear that vendors might say: if you're going to run our fancy enterprise software system, you also need to run our special scheduler that goes with it, and if you don't, we won't support you. Again, that wouldn't surprise me at all.

Then there are ABI concerns. In theory, anything that is used by a BPF program is considered internal to the kernel; if a kernel change breaks it, it is the problem of the BPF programs and their maintainers to fix their side of things to match. In practice, if somebody had a scheduler they depended on, and then a scheduler change within the kernel broke it, and they screamed, I think there would be an awful lot of pressure to revert that change. That, in turn, puts a really strong restriction on how you can evolve the internal Linux CPU scheduler going forward, and the scheduler developers really, really disliked that idea. Kernel developers in general dislike that sort of thing: it puts you into a straitjacket that really restricts what you can do in the future, and people don't want to see that happen. So this is another fear that kept the extensible scheduler class out. And finally, the scheduler developers were worried that this would simply take developers away from the core scheduler: people would make their own little fixes, run them in their own little BPF schedulers, and not contribute their work back into the core scheduler where it would benefit everybody.

For all of these reasons, this particular work was rejected from the kernel. But this was a decision made by one developer, not by the kernel community as a whole. As a community, we have not yet decided what we are willing to export via BPF and what is off limits, what you must do in C within the kernel itself. Where do we draw that line? I think at some point we are going to have to articulate that a whole lot better.
Otherwise we are going to frustrate our users, and we are going to frustrate the people who are trying to develop these advanced capabilities in BPF, only for them to be told that they cannot actually get their work into the kernel. I think that is not fair to them; it is really not fair to anybody to not know where that division is. So expect some discussions on this over the course of the next year or two as we try to figure out some sort of policy that says: what can we do in BPF, and what should we not be doing?

Another area where I think we are going to see things happening is the Rust programming language. Rust has an awful lot to offer. It offers a much stronger type system. It offers freedom from undefined behavior, which, of course, is one of the real joys that C brings to you. It allows the development of much safer code: there are whole classes of bugs that are endemic to C code that simply cannot happen in a Rust program that gets past the compiler. (I'll show a small example of what I mean in a moment.) So if we wrote a lot of our kernel code in Rust rather than C, we would demonstrably have a safer kernel. There are lots of bugs we simply would not have to worry about. Others, of course, would still happen; nothing is perfect. But we would do a whole lot better. And the other aspect of this that really should not be overlooked is that Rust is far more attractive to newer developers than C is. The number of people coming out of university who are just jumping up and down wanting to write C code is smaller than it used to be, shall we say, and I don't expect that to change. Now, as we saw back at the beginning of the talk, we are doing pretty well at bringing developers into the kernel community. But this is something that we want to continue not just this next year, but for the next 30 years and more. We want to continue to have a healthy influx of developers, and allowing them to work in a language they want to work in is, I think, an important aspect of that. We are already seeing interest from new people, people we've never heard of, who would like to work on the Rust side of things. So that is an important part of this as well.

But there are reasons not to do it, too. Adding a new language to the kernel adds a whole bunch of complexity: you've got a whole new set of dependencies needed to build the kernel, changes to the build system, all that sort of stuff. It makes things more complicated, and the kernel build system is already a complex thing. The Rust language itself is still evolving quite quickly; Rust in the kernel has to use some incredible number, I believe it's over 100, of unstable Rust features right now. Not all of those features will be stabilized in the form they are being used now, so any kernel code in Rust will have to change as the language itself changes. That is something that worries people. We go very far out of our way to allow the kernel to build with a range of C compilers, including fairly old ones, so that we don't impose a specific compiler version on people trying to build the kernel. With Rust, that's not possible; you really have to use a current Rust compiler to build anything in Rust. And maintainers will have to learn Rust. This is not a small thing: we have people who have been working in C for decades, sometimes, who are now going to have to learn a new language, and a language that's not the easiest to learn at that.
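Here is the small example I promised a moment ago. It is a contrived fragment of my own, not anything from the kernel, but it shows a classic bug class, the use-after-free, that a C compiler will accept without a whisper of complaint and that Rust's ownership rules reject outright at compile time:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *name = malloc(16);

        if (name == NULL)
            return 1;
        strcpy(name, "kernel");
        free(name);
        name[0] = 'K';  /* use-after-free: undefined behavior in C */
        return 0;
    }

In Rust, once an object has been freed (dropped), the compiler simply will not let you touch it again; this bug cannot get past compilation.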
If you're a maintainer, and people are going to start putting Rust code into your subsystem, you have to be able to understand that code. You have to be able to make changes to it; you have to be able to fix it before you can accept it. So, as a maintainer, you simply have to understand the language, and understand it well. That is a big, big thing to ask of our already overburdened kernel maintainers.

Rust also brings with it a lot of glue code to make it work with existing kernel facilities, and there are some things that are simply hard to do in Rust. If you want to create a simple embedded linked list, of which there are thousands in the kernel, you really can't easily do that in Rust, because pointers running in two directions fall afoul of the ownership rules that the Rust language imposes. (I'll sketch the C pattern in question below.) There are various other things like that which have to be worked around in the Rust code. So it's not perfect either; but there is a whole lot of interest even so.

And then I have to say it, because it's true: the kernel community is somewhat conservative at times, and often rightly so. The things that we do have consequences for, I believe at this point, billions of machines out there, so you don't do things lightly; you have to be very careful about the changes you introduce into the kernel. But sometimes we are also just a little bit slow to do things. If you take a kernel developer who has been working in C for 20 or 30 years and you throw something at them that looks like this (this is the Rust version of what is a one-line macro in C that just gets a pointer to the task structure for the current process), it's rather more complex, and it looks like line noise if you don't understand the Rust language. That tends to lead to responses like this one, from a long-time kernel developer whom I've chosen to leave nameless for his own sake, who simply said that saying "we need Rust" is really an insult to the developers who have worked for years and years to build safe code in C, and that we just don't need this; we don't want it. So that's where things stand at the moment.

In terms of status: the initial support for Rust was merged in 6.1. It's enough to write a "hello world" module and really nothing else. We have seen the addition of support code in subsequent kernels, that sort of thing. But as of now, there is nothing in a production kernel, nothing that anybody is actually using, that is written in Rust; it is all just experimental stuff and support code. There is, however, an awful lot going on out there: a lot of support code and some interesting developments, perhaps the highest-profile of which is the Apple Silicon GPU driver being written by Asahi Lina, which is coming along. Asahi has been very, very explicit about how doing this driver in Rust has made the whole thing much easier than doing it in C. So there is a lot of interest in that; people want it. PuzzleFS is a read-only file system aimed at the creation of container images, trying to do things in a secure way. There is a read-write Plan 9 file-system server; the kernel has a read-only Plan 9 file-system server in it now, without write capability, but the one written in Rust has it. And there is a whole lot of other stuff going on in this area: useful stuff that people actually want, that people will actually use when it is merged into a kernel.
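And here is the C pattern I mentioned, the embedded (intrusive) linked list, simplified from the kernel's own list implementation. The list node lives inside the structure being linked, and pointer arithmetic recovers the containing object; those next and prev pointers running in both directions are exactly what falls afoul of Rust's ownership rules:

    #include <stddef.h>

    /* The node is embedded in the object it links; compare the
       kernel's struct list_head. */
    struct list_head {
        struct list_head *next, *prev;  /* pointers in two directions */
    };

    struct foo {
        int value;
        struct list_head node;          /* lives inside struct foo */
    };

    /* Recover the containing object from a pointer to its node. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static inline struct foo *foo_from_node(struct list_head *node)
    {
        return container_of(node, struct foo, node);
    }

Any object can be put on a list this way without a separate allocation, which is why the kernel uses the pattern everywhere; in Rust, though, no single party can be said to "own" those nodes.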
Coming back to all of that useful Rust code: it leads to a question. The merging of Rust support was explicitly an experiment; it was done to see how it would work out, and Linus Torvalds said at the outset: we're going to put it in, we're going to see how it goes, and if it doesn't work, we will take it all out again. But when do you decide that this experiment was a success? When do you say that, okay, it works, and we're going to keep it? The answer is actually quite clear: that decision is made the first time we merge something that users actually want to depend on. We have a strong no-regressions rule in the kernel: if something works with a given kernel, we cannot break it with a subsequent kernel. That is just the rule we live by. So if we put in, say, an Apple GPU driver, and then a few releases later we say, "oh, actually, this Rust thing doesn't work, we're taking it out", and so the GPU driver goes away, we are going to have some pretty unhappy users. We can't do that; we will not be able to do that. So this is the decision point, and I think it is coming soon. I think it's coming in this next year, because there is going to be increasing pressure to merge this code that has been written in Rust. At that point, I think, we will see whatever remaining pushback there is from the people who don't want to see Rust, and there are going to be some interesting discussions over the course of the next year. I personally think that this stuff will go in and that we will go forward with it, but I don't think there are any guarantees of that.

For my last topic, I want to talk about something that I call the maintainership crisis in the kernel. We are seeing maintainers complaining about the job of being a maintainer, with the now-former XFS maintainer saying that it feels like punishment, and people burning out. So what's going on here? Why are people complaining? Well, there are a lot of problems. The demands on kernel maintainers have been increasing over time as the complexity of the code grows, as the rate of patches goes up, as there is more to review and more to do. And then people start talking about, say, introducing Rust and making maintainers learn Rust, and so on. There is a lot going up at the same time. There is a lot of understaffing, both at the maintainer level and at the kernel-developer level; even though we have some 2,000 developers participating in every release, it's not enough. And so, again, you see things like this, where Darrick Wong says that he has friends who work for small companies, who are used to seeing the sorts of pathologies we see in the kernel community as a result of the stress of being understaffed and overworked. But he doesn't understand why this is happening when everybody involved is working for these hundred-billion-dollar companies. It doesn't make a whole lot of sense to him; it doesn't make a lot of sense to a lot of people. But that's what's happening. We see a real lack of employer support for the maintainer role in particular. Here is a quote from one of the key kernel maintainers, saying that he is, in effect, a full-time maintainer, but he has a full-time job, and that job is not his maintainership; he is doing that on the side. Many kernel maintainers are doing their maintainer work on the side. I do my maintainer work on the side; nobody pays me to do that, and that is true of many of the others as well. We have been pushing to try to change that.
We actually added a document to the kernel tree a little while back where people can evaluate how well their company is doing at supporting maintainers, and then maybe take that to their boss and try to make things better.

We are also having trouble with kernel fuzzers. Fuzz testing is a great way of finding bugs; it is a very valuable tool for the kernel, and it has found a lot of bugs that we have fixed. But fuzz testers are also generating thousands and thousands of bug reports, many of which are not seen as being particularly high-quality reports, and people have to look at all of these and triage them. That adds to the demands on maintainers as well.

What it really comes down to is that, like every other part of the open-source community, there are a lot of dark areas in the kernel development community. Even though we have hundreds of companies supporting every kernel release, there are big areas of the kernel where no company says: this is our problem, we need to put resources into supporting it. It happens all the time. I must, of course, mention that documentation is one of those areas. We have 2,000 developers participating in every kernel release, something like 4,000 to 5,000 developers over the course of a year, and there is not one developer whose job is to create documentation. Our documentation reflects that; it really does, I hate to say it. The kernel build system, a complex thing, is maintained by one person; if that person gets fed up and leaves, I don't know what we are going to do. A whole lot of areas of the core kernel itself are undermaintained, as are drivers for older hardware. A lot of companies are happy to support their current hardware, but a couple of years on, when they are selling their new big thing, they don't really care much about supporting the older stuff that they wish people would just throw away and replace. So somebody else has to maintain that at some point. And the maintainer role in general, which is a separate job from development, is undersupported in the kernel community, as it is everywhere else in the open-source community, to tell the truth. So this is a problem that we have.

Scott McNealy, for all his faults, actually said something useful a while back, which is that open source is free like a puppy is free. You can certainly get it into the house very quickly, but if you then say, "okay, I'm done", you are going to find messes on the floor and all your shoes chewed up. The same is true of free software: you can get it for free, but you have to pay attention to it, or you are going to find some messes where you really don't want to find them. So I leave you with a final question: how can we take better care of this particular puppy? Because we have a real lack of support for a lot of areas that we need to support within the kernel, the maintainer role and beyond.

And with that, it looks like I have just about exactly one minute for questions, if people would like to ask questions. I thank you very much for your attention.

The question being: since I was showing 200-300 developers entering the kernel community every release, while the total number of developers is growing much more slowly than that (it is growing, but much more slowly), that implies that a lot of developers leave. And the answer is yes: obviously, people leave.
Of those 200 or so first-time developers, probably about half of them (I haven't done this analysis in a while, but probably about half) are one-time contributors. They come, they find a particular thing that irritates them, they fix it, and then they go back to whatever their real job was. Others stick around, and people come and go over time. It's a big community, so there will be some attrition as well. We are growing, but certainly not at that rate. Does that answer your question? Anybody else?

The next question: how much is BPF used by attackers? Are there stories of attackers using BPF? You know, I don't personally know of any, but I'm sure they must exist; attackers will use every tool they can get. BPF, too, of course, has had a vulnerability or two of its own; any system that complex is going to. I don't know that they're using it, but I'm sure that it is happening. You give people a tool, and it's another entry point into the system. Of course, to load a BPF program, in general, you already have to have some sort of root access for almost everything; not everything, but almost. So usually, at that point, you've already won the game. But there may be situations where attackers are using BPF as well.

All right, at this point I think I am out of time; I need to let the next speaker set up. I thank you all very much for your attention.