Thanks. Okay, now it's recording. I can't see the Q&A, but I can see the chat, so somebody's got to tell me to go look at the Q&A. First off, these slides are all online; they'll be given to you as well, so you can see them. This is based on a talk I gave a few months ago at the Kernel Recipes conference in Paris. It's a really good kernel conference, a small bunch of developers, highly recommended; all the talks and slides are online. This is "Trust in the Linux Kernel," and it builds on that previous talk. This is going to be a little more specific about how our development model works, how we create trust, and how we do testing, really; this trust is about testing. Of course, disclaimer: all of this is just me. It has nothing to do with the Linux Foundation. My contract says they can't tell me what to do and I can't tell them what to do, and that's what this talk is: my personal opinion. As far as personal opinions go, let's start with the big one. Open source software is more trustworthy than anything else. It's been argued by plenty of other people, but the main reason it's considered trustworthy is not necessarily that the code is without bugs or works better; it's that you can audit it. Somebody can audit it. You can go look and see how it's supposed to work and how it actually works, by looking at the source code, at any point in time. And most importantly, you can fix it, and you can fix it in time. You can also go back and look at what happened in the past, which is a very important thing. Lots of problems come up as claims like: somebody three years ago introduced bug so-and-so somewhere. It's happened with closed source products that people find out something happened a number of years ago and conclude the product became untrustworthy; well, what really happened there? You can't know. If you have the source code, and the history of all the source code, you can tell. You can go back and look, you can rip code out, you can audit it, and you can fix it. That's the main thing. And this really came to light for the rest of the world with the University of Minnesota episode, where they tried to submit patches that were purposely incorrect, and yet at times they sent correct patches. It was a very ironic episode. They lied about the research paper, they did some other things, and it's basically a case study in how not to do research on an open source community. I gave a big long talk about this, so I'm not going to go through the details; that's my Kernel Recipes talk, go see that if you're curious. It walks through the history, what happened after the public announcement, how we audited everything, what's going on now, and how the kernel's process has changed since then for researchers: it puts a little more onus on their work. They have to work in the public eye; they can't work privately, and they can't try to sneak stuff into the kernel without telling us what they're doing. That's good, because we want to work publicly and together as a community, and you don't do research on a public community without telling them. It's as simple as that. Anyway, there's a link to the talk and the slides; go check that out if you're curious.
But again, the whole episode proved that you can go back in time and audit code based on new information. All of a sudden we were left feeling that patches or changes submitted by a certain group of people were now suspect, right? Whether that was true or not, we weren't there to judge, but we could go back and audit to see whether it was true. Did they submit bugs? Were these changes incorrect? And it turned out the majority of their changes were incorrect. Now, whether that was due to negligence, or just not being good developers, or purposely doing wrong things, I'm not going to answer; that's up to them. But you can go back, make the decision yourself, and audit it. You can rip it out, you can change it. And we did exactly that, based on new information. You can always go back into the past. And again, that makes a body of software more trustworthy, because new information comes about all the time, and you want to be able to go back, audit, and detect things. That's just the way things work, and that's a good reason. So let's talk about trust when it comes to Linux. The first thing is our license. The license of the kernel says we have no warranty: you don't trust us, right? There is no trust. You should apply no trust to the software you're given. It is up to you to use it as is; we give it to you as is, you accept it as is, and there's no legal responsibility here. So legally, there is no trust involved. Well, that's fine, that's our license, but really, we want people to use our code, right? So we have to give them some kind of assurance that we're doing things in a good way, even if, legally, we don't have to provide it. When the University of Minnesota thing came out, lots of people instantly said: you have to verify everybody who submits to the kernel. You have to know who they are, track who these people are, and based on who they are, you can trust them, because you should be able to trust some people and not others; different people should get different levels of trust. That's the normal knee-jerk reaction: require verification of all your employees, all your submitters. And that's just wrong, and I'll show you why. It's a very naive model, a reaction that doesn't solve the problem at all. Let's talk about why. This is our development model; I've been giving this talk for 20 years now. At the bottom, there are developers. We have a lot of developers; we have the charts and numbers: 4,600 developers last year, in 2021. 1,600 different maintainers for different files or tiny subsystems in the kernel. We have about 350 different subsystem maintainers and trees. And we have Linus and Andrew Morton, and everything gets merged into linux-next. This is the way the kernel development model works. That's from almost 500 different companies, too. So first off, step back: if you wanted to verify every developer you have, from 500 different companies, almost 5,000 people every year, how would you do that? What kind of process would be involved? All different countries, anywhere around the world. That's impossible. You could probably come up with something.
If you did come up with a process to do this, you would instantly stop all kernel development. So trying to verify everybody is not really a good idea; basic proof of identity just wouldn't work at that scale. Now, you can do key signing and build a web of trust, which we do for our subsystem maintainers. To be a subsystem maintainer, you're a verified person: you have a key, and you can tell when those people submit something. Linus takes signed pull requests from those people. So at the upper levels we do have verification of who is who and where things are coming from. But for the 4,000 to 5,000 developers, it's not required; it's not even a feasible responsibility. So, here are our statistics. Yes? Somebody has a hand up. Okay, would you like to unmute yourself and ask your question? Looks like we have a few hands up; I don't know if they have questions. Accidental hand raising? No? Okay, thank you, we'll continue. Sorry about that. It might be easier to type it in the chat or in the Q&A; that way we know it's a real question. Right, cool. Please do interrupt if you have questions about this. I'm going to go fast, I have about 55 slides to get through, but please interrupt me; I'd rather discuss it, it's much easier. It's hard to do this as a webinar, but questions are good. Cool, no questions, let's move on. So this was our development last year: we had 79,000 commits (I live in Europe, so depending on the slide that thousands separator might be a period or a comma, sorry). That's a lot of commits. A lot of the time when we make a commit, we mark that it fixes a certain other commit, right? We want to show: this was a bug fix, and here's what we fixed over there in the past. So we have a way of tracking what we fixed and when. 13,000 of those commits were marked as fixing something previous. So 17% of all the changes we accept into the kernel are fixes, which is good; the rest are new features or other ideas and things like that. That's about normal for a project. But this is measured after the changes have hit our trees, after they've been merged into a maintainer's tree and gone through our testing. There's a lot that happens before a change ever gets to this point; we weed out a lot of stuff well before then, and I'll talk about how that process works. So 17% may seem high, but it's also a good representation of testing. The big problem with testing a general-purpose operating system is that it has to work for everybody in all situations. We can test only under certain limited situations: my machine, your machine, your device, your product, the cow milking machines. Then, when fixes come up for your specific scenario, you get them merged back in. That's how it works; testing for your specific use case is key, because we can only do so much on our end. And, again measured after things hit a subsystem tree: 26% of those 17% were fixes for issues caught before Linus even did a real release.
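To make those Fixes: tags concrete, here's roughly what one looks like in a commit changelog; the commit ID, driver, and names here are made up for illustration:

    usb: examplehw: don't leak memory on probe failure

    The probe error path forgot to free the buffer allocated above.

    Fixes: 1a2b3c4d5e6f ("usb: examplehw: add probe support")
    Signed-off-by: Jane Developer <jane@example.org>

Tools can then mine those Fixes: lines to produce exactly the kind of statistics shown on these slides.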
The way we do releases is this: rc1, release candidate 1. Linus takes all the merges from all the subsystem maintainers, the past three months of work, during a two-week merge window. And then from there it's bug fixes only for the next five, six, seven weeks. So a lot of the time we're fixing things that went into rc1 and that never actually shipped in a real release. Our testing processes catch these: we see build regressions in certain situations, we do other kinds of checks. So the number of real fixes against previously released kernels is smaller than 17%; you can do the math there. Whether what we're running at is good or bad, nobody has ever really tracked this stuff before; all I can go by are the numbers people assume for commercial development, and this is much, much better than that. Also, in 2021, 12% of all commits were for problems in older releases, not just the current one. Again, that's how these things work: when you go back in time, you look back as far as our history goes. This has always been there. Jon Corbet at LWN does a report that shows fixes and where they came from over time; it's a nice graph. Fixes are heavily weighted toward the current kernel, then tail off for older ones, with a little bump at the very end; there have been fixes for releases going all the way back to the beginning of our Git history, which is 2.6.12, I think. Anyway, it's old; that's the way these things work. So let's talk about who did the work; everybody likes to see this. I don't think we did a formal report this past year, but these are the people who committed the changes. As I always say, this is quantity, not quality. That being said, there's a lot of good quality in here, and I'll call out some people like Lee Jones and Geert and Colin. They all do bug fixes: clean-ups, code fixes based on reports and on automated testing tools, janitorial tasks like fixing up build warnings, including warnings from new compiler versions. This is all good janitorial development work to keep a code base solid and working well. Those fixes then get propagated back to older kernels, so we can build older kernels with new compiler versions. That's a big thing, because kernel versions sometimes live longer than compiler versions, and we keep newer kernels building with older toolchains as well. So a number of our top developers are actually doing bug fixes, fixes for tiny things, fixes for good stuff. That's who did the work last year. Now let's see who's doing the fixes. Some people mark things with Fixes: tags and some don't. Dan does great work; he has a bunch of static analysis tools and marks things with Fixes: tags, as does Colin again. All these other people did a lot of fixes too; these were the commits that got merged. The percentage shown is the percentage of all commits in the kernel for that year. If you look, Christoph, the number one contributor to the kernel, only did 1.2% of all the contributions. So it's a long, flat curve, and a very healthy one. We have a very vibrant, healthy community; we're very lucky as far as open source communities go. Our community is huge, it keeps growing, and it's working well; that's the sign of a solid community. So, top fixers. Then I went back and asked: okay, who are these people fixing bugs for?
I don't want to call these people out; everybody writes bugs. But I will say that a number of the people in the top 10 whose work got fixed are our core developers, and that only makes sense if you think about it: the more code you write, the more bugs you write. It's a percentage of the quantity of what you do. So if we tried to prevent the people who write the most bugs from contributing to the kernel, we would stop contributions from our most prolific and most trusted developers. And again, on the idea that you need to know who your developers are: we do know who our developers are. You can look; these statistics are there for everybody. Who writes the most bugs? Our most prolific developers. I write a lot of bugs, because I've written a lot of code; it's a percentage game, right? Am I a bad developer? No. Am I there to fix the bugs I write? Yes. And that's the key; that's a trusted developer: somebody who's going to be there to fix it. I'll talk about this more, but the idea that you should keep out people who have a history of writing bugs is not a good idea. You can argue percentages and whatnot, but it just does not work. We're all human, we all make mistakes. I've written a lot of fun security bugs; I've fixed a lot of security bugs; hopefully more of the latter than the former, who knows. You can't keep people out based on that. The most prolific developers on any project, closed or open, will write the most bugs. People forget this; that's just how it works. So, to think you can sort out who is trusted and who isn't: your most trusted people are writing the most bugs. So much for that model. What do we do instead? The answer is to make bugs easy to find and easy to fix. That's the goal, because everybody is going to make mistakes. I want you to find my bugs; I want our infrastructure to find the bugs I write, and I want it to find the bugs you write. If we make bugs easy to find and simple to fix, that prevents malicious, or unintentionally bad (you often can't tell the difference between the two), changes from getting in. It's as simple as that: the project is healthier and works better for everybody if bugs are easy to find and fix. So let's talk about the life cycle of a kernel change. First, are there any other questions on this? "No more questions, Greg; there were a few others I answered." Okay, cool, thanks; yes, those are good links. So, the life cycle of a kernel change. A lot of people don't seem to understand this: how do patches actually get into the kernel, and what steps do they go through along the way? They see it from one point of view or another, or it's just a big black box. So let's walk through it all. First thing: email. Send a patch; that's it. We have tools to help create a patch and tools to help send a patch. Email is the lowest barrier to entry for anybody in the world. It's all done in public; there's no central authority where you have to log in or maintain an account identity. None of that is there.
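To make "send a patch" concrete, the whole entry path is a handful of commands. A rough sketch; the addresses and file names are placeholders:

    # turn your latest commit into an emailable patch
    git format-patch -1 HEAD
    # check it for style problems and common mistakes
    ./scripts/checkpatch.pl 0001-my-change.patch
    # ask the kernel tree who should receive it
    ./scripts/get_maintainer.pl 0001-my-change.patch
    # send it out; for a later revision, regenerate with: git format-patch -v2 ...
    git send-email --to=maintainer@example.org \
        --cc=subsystem-list@vger.kernel.org 0001-my-change.patch

checkpatch.pl and get_maintainer.pl ship in the kernel tree itself, which is part of why the barrier stays so low.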
And people don't realize that the patches we send through email are verified: they're cryptographically signed at the mail level. The patches I send out, you can verify that I sent them, that the person who holds the key, me, sent those patches. Email works great for this. You can verify them; I do verify them. When you send patches in, I verify that they really came from the correct domain, that an employer's domain checks out, things like that. We verify patches as they come in. Email works wonderfully; it's the lowest possible barrier to entry, and I don't think we can get it any lower. Everybody says, oh, don't you want GitHub and such? Well, GitHub has a long, arduous process to sign up for an account, right? Not all countries can sign up for GitHub accounts, and there are legal ramifications like that. Email: send us an email and away we go. You can't send us an anonymous patch; other than that, send it to us. Very safe. We review this stuff in public on the mailing list, and either we reject it, well, we usually reject it; most changes are rejected at first. You can also see the status of things with the patchwork tools we have for some subsystems, or for the whole kernel; you can see what's going on there. And that's the way things work. A patch takes, and I'll talk about this more, at least three attempts to get through. Because, again, you're going to resubmit it based on the review, resubmit it again through email, and document what changed from the previous version. Some people like me get 1,000 emails a day and need to do something with them; other maintainers have smaller volumes; the networking people get a ton of patches. You need to say what changed from the previous version, because I don't remember; that was 200 patches ago. Then we can go from there. Our tools show this as well, but properly documenting it is a good indication that you knew what you were changing and that you took everybody's review comments into account. It's all plain text, a simple editor; pick whatever editor you want. You can use git to do all this, or not; a lot of people still don't use git. It's up to you; we don't force the use of any specific tool, beyond some random email client. That's all you need. Resend it. Again, the average change takes about three attempts. That's based on numbers from subsystems I work on: the driver core, USB, TTY, serial; even staging and cleanup patches take about three tries to get right sometimes. It's not a fixed number: some go through right away, some developers have more experience, some take even more rounds. I looked on the list today and saw something on version 25 of the patch series. I think we've gone up to 52 at times; there are some big CPU bug or feature series in the 50s or 60s that took a couple of years. That's just how they work; it's nothing out of the ordinary to keep it going. Review happens as it goes; you want us to reject patches that aren't correct. Take the review process as it is and keep moving forward. And here's the thing people don't realize: every time you email a patch, bots run on it. We have testing that runs on the system, and if those bots find problems with your changes, they'll reply and effectively reject the patch automatically. Me, as a maintainer, I love seeing messages like this. This is an example that just came across from the kernel test robot: hey, Johan, this causes problems. There it is.
As a maintainer, I can go: okay, great, there's a problem, I don't have to worry about reviewing this yet; it needs to be fixed, he'll take care of it, and we go on from there. Sometimes these reports are wrong, but all the information is in there. Your patches get run through these bots automatically. Sometimes there's a delay; sometimes reviewers get to things faster and point issues out before the bot can. But everything you send to the public list gets tested, automatically, and people don't realize that. We are testing, and I'll show you what we test for; it's a lot. This is the kernel 0-day bot; Intel runs it, and they say it provides a one-hour response time. That comes and goes depending on how the model is working, how the review process is going, and what their back end is doing at the moment, but every patch is tested. The testing covers the branches of developer trees too: when I merge something into my tree, it also tests that, and I'll talk about that as well. So even after I accept things, it retests them, and I don't allow them to move on to the next stage unless they pass. Because sometimes I'll take things before the bot has worked through the email queue, and sometimes things I take will interact with other things I've previously taken, and the bot catches that. It builds with a bunch of static analysis tools; what it supports is listed on the web page. And when things fail, it starts bisecting. In trees like the stable trees, which are older, it will run new tests, say, hey, there's a problem somewhere, bisect down, say, here's the problem at this commit, and email everybody involved. It also does performance testing, test benchmarks. For the memory management and IO subsystems you'll see emails go to the mailing list: I saw a regression of X percent on this benchmark with this patch. And you go: okay, why? And then you need to work on it, and sometimes they'll have bisected it down for you. This is really good stuff.
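That automated bisection, by the way, is the same process you can run by hand when you hit a regression. A minimal sketch; the version tag and test script are placeholders for whatever you're chasing:

    # mark a known-bad and a known-good point
    git bisect start
    git bisect bad HEAD
    git bisect good v5.19
    # let git walk the commits; the script must exit 0 on good, non-zero on bad
    git bisect run ./build-and-boot-test.sh
    # git prints the first bad commit; then clean up
    git bisect reset

The bots just do this loop for you, at scale, and mail the result to everybody on the original patch.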
This is stuff being pulled from the mailing list and from our developer trees, and it works; it's a really, really good test. It weeds out probably 90% of the most obvious problems on its own. Look at the mailing list and you'll see this bot just churning away, responding to things. Wonderful thing, great tool; Intel does a great job with it, and we're very happy. They keep adding more tests, and you can add tests to it. You can also say: please sign my development tree up for it, so it runs on your tree before you send things out to the public. It'll do that for you. You can add lots of stuff, and they run a lot of tests; they don't even say how many. So, at the bottom, with all the developers, the 0-day bot runs on everything sent publicly. Again, that's why we work in public: you can see who sends what, and then the tests happen in public too. That's how we do this well. Then we keep going. Say me as a maintainer: I accept a patch, it looks good, I add it to my tree. Then the real testing starts. 0-day does a good amount of testing, but there's a bunch of tooling that kicks in in the background that starts doing really good work. The big ones that run are KernelCI and 0-day; 0-day, again, runs on our developer trees. At the blue line, all these subsystem maintainer trees, we have about 350 of them, it starts churning away when we push things out, and emails us whether things are good or bad. KernelCI also runs on these trees, reporting things and really finding problems. And KernelCI is great. KernelCI is now a Linux Foundation project, started by embedded developers, and it's community-led, with a whole bunch of different companies running it and contributing. You can contribute: all the tests are open, all the infrastructure is open. It lets us all test the way we do development: all in the open, all together, collaboratively, contributing to a common set of end goals. As a maintainer, I don't want to go look at five, ten, fifteen different trees or locations on the web to find out whether these trees are good, and I don't want five or six different emailed reports from different subsystems or different tools to see if something worked. Just give me one. KernelCI is a collaboration toward, hopefully, in the end, all the really good testing done in one really good way. Right now it's tracking 62 different branches of kernel development trees and doing 13,000 different build and boot tests on real machines, all around the world, across a whole bunch of architectures. Microsoft and Google offer a bunch of build capacity in the cloud, so the builds happen up there; then devices boot the results and run some tests, and they're trying to add more tests after boot. It reports back, and it makes a nice pretty report; you can see the current dashboard online and see the state of everything. The neat thing is these are real devices out there in the real world being tested (some are emulated). You can add your own lab to it. At one point a whole bunch more devices suddenly showed up and we wondered what happened; it was a company's lab that had just been added: okay, now we're going to start publishing our tests publicly. You can hook your stuff up to it; it works really well, and I'm very happy to see that. If you have a device that runs Linux and you want to make sure all future kernels run on it, add it to this. You can add your own device to the system and keep it going, and then you get reports on whether the latest kernels on the various trees have problems on it. It's a really, really good early warning system: you don't want your system to break, and we don't want to break your system. "Greg, if I may: you can also go in and submit your own tests, through the KernelCI GitHub process. They will review those tests and add them. So that is also possible with KernelCI, and it's true of the 0-day bot as well: you can always request more tests to be added. The people that maintain those rigs will look through the tests and add them for you. So you can add both devices and tests." That's a really good point. And Shuah is the maintainer of the kernel's testing subsystem: as more tests are accepted by her into the kernel tree itself, those automatically get run by these systems, which is really good.
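Those in-tree tests are the kselftests, and since they live in the kernel source itself, anyone can run them locally too. A rough sketch, assuming you're sitting in a kernel source tree; the target chosen is just an example:

    # run one subsystem's selftests against the running kernel
    make -C tools/testing/selftests TARGETS=net run_tests
    # or build and run the whole collection
    make kselftest

Adding a test under tools/testing/selftests/ means every system in this chain, 0-day, KernelCI, and the rest, picks it up for free.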
Because then sometimes it finds problems with older kernels: we get newer tests added, we find problems in older kernels, we fix them, we move on. So get your tests into the kernel tree itself. There are other test suites in separate packages, the file system tests for example, but the number of networking tests, the number of BPF tests in the tree, is large; we have a huge number of tests now. Get your tests into the kernel tree. Cool. Another nice thing is that KernelCI created a common way to report test results, because when you have results coming in from different testing systems, you want to be able to conglomerate them in one spot. They've created the framework for that. Google's syzbot tests now contribute to it, I think, and Red Hat has been doing CI testing on upstream kernels, which is good, and that feeds in there too, along with all the KernelCI results. So again, it's one unified location where we can see all the results. There's more effort going on to get other groups involved and participating; I think they're trying to feed the 0-day results in as well, and that might be done soon, plus a few other external testing companies. I'll talk more about how the stable kernels are tested; there's a lot happening there, and it will eventually feed into KernelCI too. Ideally, I, or any maintainer, just get one report: here are all the problems from all the different testing systems, and we go from there. That's the goal: conglomerate it, make it unified. So that's a lot of tests: 13,000 tests running on your changes, all the time, every day. Then we get to Linus, and to Andrew Morton and linux-next. linux-next happens every day: it merges all our different subsystem trees together, everything that's going to go to Linus for the next release, not the current one. Kernel 0-day runs on that, KernelCI runs on that, and Guenter's system and LKFT run on that as well. All these changes merged together form a common base we can all agree is what's coming next, and it's a good place to test independently of the subsystems; if you're a testing system, it's easier to test that. LKFT is from Linaro: Linux Kernel Functional Testing. I wish it were more unified with KernelCI; currently it's independent, and we're working to bring those together, but right now Linaro and its member companies are sponsoring it, and you can't look a gift horse in the mouth. They do a lot of really good testing for me on the stable releases and on Linus's RC releases, and I think on linux-next on a daily basis as well. There's a tool there called TuxSuite; you can click on that and see it. It's a Python testing framework that you can submit jobs to and get results back. It's all open source; I think you can even run it on your own back end if you really want. The cool thing is there's a lot of testing in there, and it runs a lot of build permutations automatically. 125,000 tests are currently being run by these things. Quantity isn't everything, quality matters, but there's a lot happening in there: build, boot, and functional testing after boot. It's not doing all architectures.
It's doing a subset of architectures, a subset of configs, and a subset of compilers. It does test a few different compilers; all the Clang developers are using it to make sure things keep working well, and it's doing really good stuff for them. It's not everything, not as much as some of the other testing systems, but it's something, and we're happy about it. And then there's Guenter, a subsystem maintainer of a driver subsystem in the kernel, who has been doing this on his own for quite a while. He has his own build-and-test system, and it's public: you can download it, he's made it available for everybody, and I know some other people run it locally. He actually builds all the architectures the kernel supports, which is awesome; I think he's the only one who does that. And he boots them, on all the architectures the kernel supports, using QEMU: not native machines, emulated ones. That finds a lot of problems we couldn't find otherwise: build problems on odd architectures, build problems on really funky old configurations, and boot-time problems on configurations and systems we don't have anywhere else. Even though it's emulated, it's a really good test. He's found more problems in the old ARM32 code than anybody else, and he beats the PowerPC people to their own bugs even though they have their own tests on real systems. It's invaluable for all the stable testing; it runs on Linus's tree and linux-next on a daily basis, and on all the release candidates I put out for the stable releases. Really, really good testing; I couldn't do this without it. He runs it at a certain time of night because of cheaper power where he lives in California; hopefully somebody sponsors that someday, but it's a really good system.
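A tiny sketch of that kind of emulated boot test, assuming you've just built a 32-bit ARM kernel and have QEMU installed; the machine type and paths are examples and vary by target:

    # boot the freshly built kernel under emulation and watch the serial console
    qemu-system-arm -M vexpress-a9 -m 512M \
        -kernel arch/arm/boot/zImage \
        -dtb arch/arm/boot/dts/vexpress-v2p-ca9.dtb \
        -append "console=ttyAMA0" \
        -nographic

If the kernel panics before reaching an init, you've caught a boot regression without owning a single board, which is exactly the trick being described.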
So again: 0-day, KernelCI, LKFT, and Guenter's system are running on all of this, all the time, every day. This is how we gain assurance, and why you should be able to trust it: the testing is all done in public, and the tests communicate when there are problems. When things hit Linus's tree, even more people test them. I don't test Linus's tree directly; I test my subsystem tree, and I test my subsystem tree merged with Linus's tree, so I see different bugs that can happen in the interaction, and we go from there. So the number of bugs found before a change even gets far is huge, thanks to all that other testing. Another big thing: we're seeing more and more subsystems say, hey, thanks for the bug fix, can you also write us a test at the same time? I saw that yesterday for a really weird, nasty CPU architecture issue: okay, great, we'll take this only if you submit a test too. So they added a test to the framework. The framework is easy, Shuah maintains it, it's really simple to use; I've written tests for it. And then that bug can never come back. That's the key: you don't want a regression, you want a way to prevent it. Submit a bug fix, submit a test for the fix, and away you go. It's hard to write tests for drivers that need specific hardware; that being said, there are driver subsystems that have tests. The DRM graphics drivers have whole emulated GPUs and frameworks they run, which I don't even cover here, because they have a whole separate testing infrastructure and dashboard that everything goes through. If you submit a patch there, you start getting automatic responses telling you, go look here, this is what happened. They do a ton of work that way, and they can emulate devices. There's other work happening so you can emulate devices with Python and user space tools, and we'll eventually roll those into the kernel as well, so we can test virtual devices. Sometimes there are bugs in the hardware, as you know, so you have to emulate the bugs in your model to get your driver to work properly; that's life when you're dealing with hardware. So that hopefully gives people an idea of how we test. Now, briefly, stable kernels. Stable kernels happen this way: I take Linus's tree and do stable releases on a branch. The rule is that a fix has to be in Linus's tree before it can go into a stable kernel. I do a release candidate and a release roughly once a week for each of the different kernels I'm maintaining at the time, and then I get reports back, even more reports than Linus's releases get. Again: KernelCI, Guenter, Shuah, our own testing. The Android systems kick off and report bugs back to me; Huawei does a bunch of testing; NVIDIA tests; Debian and Fedora test. And a number of chip companies send me private reports, usually after the release comes out, saying, hey, everything's working fine. One company sends me something once a month: everything looks good; or, hey, there's a problem over here, and they bisect it down to a patch: this has a problem, can you look at it? Which is great. If you don't want to post that publicly you don't have to, but it's nice if you do. For stable releases it's really good to know people are testing and verifying that things work. We have a lot of testing happening, and when we do the releases you can see all the tests done by the different groups. There's a question in the Q&A box: how soon have zero-day vulnerabilities in stable kernels been patched in the past, and is there a page maintained to confirm per-version vulnerabilities? On the zero-day stuff: we got the report and we fixed it right away. If you look at how the kernel handles security issues: we treat a bug as a bug as a bug. We don't call out security issues to the public per se. A lot of the time we fix things without even realizing there was a security issue; we fix it and move on: merge into Linus's tree, merge into the stable trees, push it out. We don't deal with CVEs. I gave a whole talk about how CVEs don't work for the kernel; MITRE, the CVE organization from the U.S. government, agrees that the CVE process doesn't work well for the kernel, so much so that they don't even give me CVEs anymore. So we don't track these things, and we don't have a per-version page of what's fixed; we have open changelogs. What I will say is: we fix things before people realize they were a problem.
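On the mechanics of how a fix gets routed into those stable trees: the developer tags the patch itself, in the changelog. Roughly, with the name and version range being examples:

    Cc: stable@vger.kernel.org # 5.15.x and newer
    Signed-off-by: Jane Developer <jane@example.org>

Combined with the Fixes: tag shown earlier, that's enough for the stable machinery to know both where a fix should go and what it repairs.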
The Android team at Google did some research a number of years ago, and every single one of the reported vulnerabilities they looked at had been fixed in a stable kernel before it was made public. That's what happens: we fix things every week, often before we even realize a given one is a security problem, either on purpose or because we can't tell; you take the updates and roll them out. So: always take the latest stable kernels. That's my mantra; we're fixing things in there all the time. If you're not using the latest stable kernel, you have a vulnerable system; I'll tell you that, I completely admit it. The easiest way to be sure you're up to date and patched against all the latest known problems is to use the latest kernel, simple as that. We guarantee we're not going to break user space, and you should have systems in place so you can test the latest kernel, or any kernel update, be it a one-line fix or a major upgrade to a newer version; it should be the same process for you: push it out and go. Android has the latest upstream LTS kernel available for devices, usually within a day or two; if you're relying on Android, it has all the stable stuff in there as well. If you're on a distro, a number of good distros do this well. Fedora does it amazingly well; Debian, amazingly well, and the majority of the world runs on Debian systems, people don't seem to realize how huge the quantity of Debian systems is. Fedora, Arch, openSUSE do it. SUSE Enterprise backports bits and pieces. RHEL doesn't take it all; that's me arguing with the RHEL people, so talk to them and have them justify why they don't take a fix. So yes: just take the latest stable kernel, and that's how you're guaranteed all the fixes we know about.
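One practical footnote on "just take the latest stable kernel": if you build your own, tracking it is a short workflow. A sketch, assuming you already have a working config; the tag shown is only an example:

    # add the stable tree once
    git remote add stable \
        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
    # pull the newest tags and check out the latest one in your series
    git fetch stable
    git checkout v5.19.8        # example; pick the newest tag for your series
    # carry your existing config forward and build
    make olddefconfig && make -j"$(nproc)"

Then run your own test suite on it, exactly as recommended here for any update, one-liner or major version alike.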
"Thank you, Greg." Sorry, that was a rant; I've given whole talks on this. "That's good information." So, trust in us and our development process: we trust you, and then we verify that the change works properly. It's the only thing we can do. Trust, then verify; trust, then test. I want my own patches to be tested and proven, so that other people can trust, not just who sent them, which turns out not to matter as much as we've seen, but that they were tested and got it right. But there is a real trust model about who sent it, and I've alluded to this in other talks: I will take changes from a certain set of people, and what I trust is not necessarily that they got it right, because we all get things wrong, but that they'll be there to fix it when it's wrong. We're human, we're fallible, we all make mistakes, we all misunderstand how things work at times, we all make just plain foolish mistakes. If I trust that you're going to be there to fix it when it goes wrong, that's the ultimate trust model. That is the Linux kernel development trust model: that you'll be around, with the ability to fix the problem when it comes up. And that's it; that was fast. Questions, comments, heckles? "There's a question in the Q&A box, I'll read it: if we use the latest kernels, sometimes the distribution's Linux patches no longer apply. How do we fix that?" Number one, you don't have to use your distribution's patches. You should be able to run a stock kernel.org kernel on your system without any distribution patches. That said, there are very good ways of doing patch management and merging things in: using quilt, using git. It can be done; sometimes it gets complex depending on how the distribution's out-of-tree patches are developed. But I would push back on the distribution and ask: why aren't you getting your changes upstream? Why am I relying on you to do this work? Was this rejected by the community? Why is this not upstream? We have found a number of problems in some distributions with out-of-tree patches that were never submitted upstream and that were proven to be vulnerable. You want the review of the kernel community, so get the patches upstream. Push back on your distribution, and if your distribution insists on its patches, switch distributions; nobody is forcing you to use any particular one, and there are lots of other good ones out there. I'll call out Debian: great, great work. Fedora, openSUSE, Arch. And embedded-wise, use Android: all the Android kernels are right there for you. Android even merges into the vendor SoC messes, and those merge in and build and boot just fine, tested on a weekly basis. There's no reason not to use the Android common kernel for Android embedded devices; it's right there. What kind of work would I like to see sponsored? Kernel development is sponsored; we all work for companies. The old joke is that you send five patches to the kernel and you get a job, and it's not really a joke; we are all sponsored to do this type of work. I would like to see more companies help provide the testing infrastructure, more companies work with KernelCI and help them out. It can be as simple as starting to send them your test results; you're doing the testing anyway, so start contributing it. If the interchange format we have for test results isn't right, work on it with us; we're glad to do that kind of thing. As far as development goes, everybody contributes to the kernel in a selfish way, and that's fine; we want that. You contribute because you have a problem you want solved in a certain way, and that's good, because it turns out everybody has the same problems. I'll call out the old adage of power management: the embedded people said, we're special and unique, we have to do this our own special way. We said no, do it generically. It got merged, everybody accepted it, your devices work great for power management, and data centers have saved billions of dollars in power thanks to that same code. It works for supercomputers just as well as for embedded. We all have the same problems; we're all special and unique people, just like everybody else. Okay, what am I drinking? Water right now; it's too early for wine, and I do live in Europe. Oh, academics. I've had a lot of work with them recently; one reason I moved to Europe was the academics. I worked with a university in Paris; there's a lot of applied research and development happening out there, and a lot of it gets merged into the kernel. A lot of groups and a lot of people do understand this. Julia Lawall has fixed more security bugs than just about anybody in the world, thanks to the work she did in academia, she's a professor, and the tool she created, Coccinelle, which we use to do static analysis and fix those bugs.
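Coccinelle works from little "semantic patch" rules, and the kernel tree ships a collection of them with a build target to match. A sketch of running it, assuming a kernel source tree; the rule file and directory here are placeholders:

    # run all the semantic patches that ship with the kernel, report-only
    make coccicheck MODE=report
    # or apply one specific rule to one subdirectory
    make coccicheck MODE=report COCCI=path/to/some-rule.cocci M=drivers/usb/

Each rule describes a bad code pattern and its replacement once, and the tool then finds every instance across millions of lines; that's how one researcher's tooling outpaces armies of manual bug hunters.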
That is applied research: very good stuff, in the kernel, working really well. The real-time operating system changes also came from academia; it was an interaction between academia, industry, and kernel developers who knew how to work on this stuff together. Tons of papers were published on it, and it was all merged into the tree. So academic work can go really well here. I live in the Netherlands now, and right up the road are the academics who found several of the speculative-execution CPU problems, including Retbleed, and other security issues. They do hardware security analysis of CPUs and how it affects the kernel; we work with them, fix the problems up, and off we go. They do a lot of operating system research there, and in other parts of this country, and there are a number of universities in the U.S. that do this as well; other developers interact with them. It happens a lot, and I'm very happy to see it. What I don't appreciate is research where somebody just wants to publish a paper and run away. I appreciate researchers who want to publish a paper, work with the community, get their work accepted, and see their ideas actually used in a real way. I like those kinds of groups; I enjoy working with them, and I do it a lot. It's fun to talk with the students too, and they usually end up with good jobs, which is always fun. It's not about quantity; it happens a lot, but you have to look at the specifics. Anything else? 300 people; that's been a very good number of questions. Oh, the mailing list is very high volume; nobody reads all of it. There's a link to the mailing lists there. The main list is basically a write-only medium; we all filter. We all read the subsystem mailing lists we're interested in; I read a number of them and they're easy to keep up with. Find the area of the kernel you're interested in and join that list; we have 50, 60 different mailing lists. Participate there, read other people's work as it comes by in the area you care about, if you care about USB patches, say. The networking list is quite high volume. I care about BPF, and BPF is right on the edge of being readable in full, but you can keep up with it to see what's happening. So don't rely on digests; subscribe to the subsystem list you care about. There are also really cool tools around lore.kernel.org: you can set up a filter and say, give me everything that Greg writes, and it shows up in your inbox; give me anything with USB in the subject line, or anything that touches this file in the kernel, and it dumps it out in mailbox format, and you read it from there and watch that feed. We have really good ways to filter those feeds. Or you can treat lore as a read-only archive and go back and look at things in the past; a lot of us do it that way. But don't think you have to read the whole mailing list: we don't, none of us do, it doesn't happen.
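Those standing lore filters are typically driven by the lei tool. A rough sketch of the kind of query being described, assuming lei is installed and pointed at lore.kernel.org; the output directory and query are examples:

    # save a standing search: whole threads touching drivers/usb/, as a maildir
    lei q -I https://lore.kernel.org/all/ -o ~/Mail/usb \
        -t 'dfn:drivers/usb/*'
    # later, refresh the saved search to pull in anything new
    lei up ~/Mail/usb

Point any mail client at the resulting maildir and you have exactly the personal feed he's talking about, without subscribing to anything.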
How does the patch rejection rate affect contributions? You tell me: our contribution rate has been going up constantly for 27, 30 years. Every year we say there's no way we can possibly go this fast again, and we do. And those are accepted patches. Again, remember, it takes on average at least three attempts before a patch gets accepted. If you cut our accepted patch rate to a quarter, we'd still be moving faster than anybody else; it's still a huge number. So, patch rejection: you want patches that are buggy or have problems to be rejected; why would you not want that? The kernel's review process is hard. It's difficult, but it's also really good. A core contributor said, a long time ago when he first started, that contributing to the kernel was the scariest thing he'd ever done, because now it's out there in your name. And when things are in your name, you take more care, you take more pride, because it's going to stay there. He said: this is going to be on my permanent record for the rest of my life. That's true, but it's okay to make mistakes; learn from them and keep contributing. My first contribution was: here's a driver. And the response was: this is wrong, this is wrong, this is wrong, and have you ever heard of this thing called multiprocessing? I went: what? So yes, it's a great review process; peer review works really, really well. On average, when you take a vendor driver, submit it to the kernel, and get it finally accepted, it ends up one third the size. We have some well-documented cases of drivers that went through the review process and came out at a third of their original size, and one third the size means one third the possible number of bugs; many of them gained functionality while shrinking. The development process works. So far we have not had a problem with patch rejection. And don't take it personally: we're reviewing your code, not you. Nobody should complain about you as a developer; we're critiquing code, which is what we're supposed to do. It's also a really good way to learn: start reading the mailing list and read other people's code. You learn to read music before you write music, right? You need to learn to read code before you write code. Review other people's stuff, see what problems come up; that's the best way to learn. Do I think there's a need for more kernel developers? I certainly don't think there's a need for fewer. But you tell me. At a company I worked with a long time ago, somebody jokingly asked me every year: hey, is the kernel finished yet? Why aren't you done? I never had a good answer, and finally I realized: it's done when they stop making new hardware. Simple as that. When people stop making new hardware, stop having new use cases, stop needing a general-purpose operating system to control all these different things, the kernel is just a tool, a hammer for somebody else to solve the problems they have, then we'll be finished. And that's not going to happen, because the world constantly changes. So there will always be room for more developers, and we'll always gladly accept them. Contribute to the kernel, get a job doing this, work for companies that do this; it's a really, really good thing for you. I think Shuah mentioned that already. Okay, do I think test-driven development belongs in the kernel? We have test-driven development in the kernel: you can write your tests and submit them, and we have unit tests in there. Wonderful thing. Please do it if you want to; nobody will force you, but you can do it today.
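Those in-kernel unit tests are the KUnit framework, and trying it takes one command. A sketch, assuming a kernel source tree with the KUnit tooling present:

    # build a small kernel and run the unit test suites under it, in one step
    ./tools/testing/kunit/kunit.py run

Unit tests written against KUnit sit next to the code they cover and get exercised by the same public bots described earlier, so a test-driven style is genuinely available today for the areas that suit it.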
So if you like that development model, wonderful; do it. It's a little hard for some kinds of code, impossible for some, and works really, really well for others; a number of subsystems now require it. It's that simple, and it depends on the area of the kernel you're working on. Next: have any security holes been added maliciously? No known security holes have been added maliciously by anybody, but there have been lots of unknown security holes added by regular developers. Again, as I said, I've written security holes myself. It's hard to know intent; we do not have a single documented example of a known, malicious attempt to get a hole into the kernel. Simple as that. Is there a deadline for getting new hardware support in; how quickly can it be introduced? There is no deadline. We take drivers and code for hardware that isn't even public. Infamously, we once went to rip out code for a whole Intel processor, saying, wait, somebody must have this hardware, we can't delete this code, and Intel said no, that chip never shipped. So Intel has had code developed, reviewed, accepted, merged, and released before the chip ever came out of their validation area. You can have stuff merged today for hardware that isn't public yet; that's fine. Other companies wait for a public announcement before they start sending patches, and then they're constantly behind. Qualcomm is known for this: they create a new device, announce it, start sending patches, and three years later it's all merged, but in the meantime they've already created a couple more devices, so they're always behind. That's their choice; that's up to them. Intel does it better. IBM does it really well: I think they brought up a brand-new Power CPU generation in the mainline kernel with something like four changes, because it all just worked from their validation. So how quickly can drivers go in? It depends on the subsystem and the hardware, but we'll take them today; we have no requirement about announcements. Yes, the US government wants to understand the most important software on earth; of course. So talk to us. We're not hidden; we all work in public. People write research papers on this. We've documented how the kernel security team works, how the development model works, all of it; we provide reports. I hope they talk to me, because I do consider this important work. And it's not going to stop based on who is or isn't allowed to contribute, because look at the last time the US government forbade certain countries or entities from working with companies or being acquired by them: it turned out open source was not forbidden, because it's all done in public, not behind contractual obligations or monetary exchange. You cannot legislate around that. Open source actually works better for everybody in the world because of that: no one random rogue country can stop it that way; it just can't happen. Our development model routes around governments because of that. And no, we're not going to start vetting people; everybody already knows who contributes. A harsh environment, widespread or isolated situations? I think a number of people in the past had a bad view of a harsh environment here, and hopefully that has changed since we adopted the current code of conduct, which was four years ago.
We have not had any reports of that. The code of conduct committee publishes a report every six months or so saying whether there have been issues, and so far there have been essentially none. I always find the "harsh" comments interesting; I've worked for companies where people threw chairs and where there were meetings to teach people how to argue. This is a very inviting, welcoming community. We want everybody to contribute; we realize that contributors from all groups make the whole project better, and we're very welcoming, open, and honest about that. Nobody should ever feel upset about contributing, or scared that they'll be yelled at. If they are, talk to us; we have ways to handle that kind of thing. So I hope the old image of a harsh community is long gone; I don't know of any recent issues. Anything else? Rust in the kernel. Yeah, it's nice; I have my Rust books back there. Maybe we'll get there. The funny thing about Rust is people say, oh, we'll just write a driver in Rust, that'll be easy. They don't realize that a driver is a consumer of more in-kernel APIs than anything else; it's the very tip of the tree. A driver relies on everything: memory management, in-kernel communication, reference counting, the driver model, all that jazz. Writing a driver in Rust means you have to have bindings for everything. The last patch series that got sent out looks really good; we're going to talk about it again at the Plumbers conference and the kernel summit in a few weeks. I think it's a nice idea; let's try it. My knowledge of Rust is those books back there, so I need to ramp up if I'm going to try to review such patches. But sure, why not try it? The thing is, the majority of our bugs are not issues Rust would help with. It would help out-of-tree modules and out-of-tree drivers, but our development process, our review process, and our testing process usually catch the most obvious issues, the same ones the Rust model catches. And it's not going to catch logic issues, because logic bugs are possible in any programming language. So the number of bugs it prevents over time might be pretty low. Nobody is going to convert old code, either. A number of the problems we find are in really, really old drivers nobody uses anymore, like the Sega Dreamcast CD-ROM driver: a memory leak in an init error path. Those aren't really an issue, but they get fixed up. Yes, Rust would have caught that, but we're not going to port that old code to Rust, and new Rust code would never have it. So: look at it, look at the patches, run them yourself, see whether you think it looks good, and let us know if you want to see this stuff. That's the best thing to do. How confident are we, when there are only the two of us backporting a hundred commits to different versions and different code bases; is it enough to trust only the test reports? Well, why wouldn't I trust test reports? Remember, the only patches that go into a stable kernel have already been through this whole process: review, development, integration, test, retest, all of those test passes. Every patch we take into stable has already passed all of that, so by virtue of that, it's a known-good patch. I trust the reviewers and the maintainers of that subsystem to have accepted a correct patch.
Next question: how confident can I be, when there are only two of us backporting hundreds of commits to different versions and different code bases; is it enough to trust only the test reports? Well, why wouldn't I trust the test reports? Remember, the only patches that go into the stable kernels have already been through this whole review, development, integration, test, and retest cycle. Every patch we take into stable has already passed all of that, so by virtue of that it's a known-good patch. I trust the reviewers and the maintainers of that subsystem to have accepted a correct patch, or, if it turns out to be wrong, to fix it, which is just as important. We're always going to have bugs. So yes, I trust, first off, that it is a good patch. Then we integrate it, we backport everything, and away we go: we run the tests again. We actually run more tests on the stable trees than are run on Linus's tree, to verify that everything still works. Sometimes we find bugs that the original testing, on the way into Linus's tree, didn't find, which is great; it shows the process is working, and it's why more people are now testing Linus's tree and getting verification there. But yes, I rely on tests. Why wouldn't we rely on the testing procedure?

The interesting thing is we test these releases as a unit. We test all the patches together; we don't cherry-pick them. You cannot just cherry-pick random patches out of the stable tree into your device and expect to have a secure, bug-free device. I have done reports on, I won't call out, a number of hardware vendors shipping Android kernels who said, oh no, we just pick out the patches we know we need, and they missed tons and tons of CVEs. The best one: they missed a commit in the stable tree whose changelog text said, this is a bug, here's the reproducer, here's how we're fixing it, and their tools missed it anyway. So just take everything. And again, we test everything as a unit. Look at the larger, complex bug fixes we've had, like retbleed. Spectre and Meltdown were not just a cherry-pick: it's about 20 patches, then about 60 patches, then more patches in the next release, and more in the release after that, because we keep finding other corner cases. Retbleed was what, 80 or 90 patches, then 10 more over the next few days, and I just saw a patch fly by today fixing yet another corner case of this type of stuff. Now, we can't really test in public when we have these embargoed zero-day issues; that's a problem we're still trying to come to grips with, because we can't publicly test issues that are private. But it shows that the process works and that you can't just cherry-pick. So yes, I feel good about the tests that happen. And also, you should test. You should never trust even a one-line patch that you haven't run through your whole test system anyway. The best verification and trust come from running it under your use case and seeing that it works for you. So test it yourself.

If I may add one thing: sometimes the one-line fixes are the ones that can really mess things up, so you want to test those, right? Yeah, that's very true; you're right, there are a lot of one-line fixes that are just flat-out wrong. And the other thing, about the trust we put in the code we develop: I am running 5.19, the latest, right now, and I will move to 6.0-rc1 as soon as it comes out. So I am actually doing this webinar on the latest bits; that's how much I trust it. All the systems I use always run the latest RC. This machine is running 5.19 as well, plus the USB development stack, the stuff that's headed for Linus's tree, so it's running a merge that's even crazier than that. And my build system usually runs the latest stuff too, because I want to verify that my build system still works. So yeah: run Linus's tree, plus your own patches. That's the best thing to do.
What about the bus factor, Linus? Ask him about that; he famously says he's not going to get run over and doesn't care. That being said, the core kernel developers have talked about this; we can handle it, we have a process in place. Don't worry about it; we know what to do. It's not going to be an issue. And there isn't really a bus-factor problem today: a number of people have write access to Linus's tree, and a number of people have write access to my trees as well. We all share maintainership of subsystems and major portions of the kernel among each other. So it's not going to be an issue. Don't worry about it.

Any activities using AI and ML? Yes, actually. Sasha, the other stable kernel maintainer, famously does this. He runs all kernel patches through a model trained on kernel fixes, the patches that maintainers have marked as fixes, and compares each new patch against that profile to see whether it matches. He trained this model together with researchers in academia, who wrote a bunch of good papers on the work; they went down some wrong paths, but it's working pretty well now. Again, academia works with us too. So he sends out these patch series marked AUTOSEL: patches his tool has found that it thinks should be backported, because a number of subsystems in the kernel do not tag fixes, for various reasons. Those picks are very good. Even I have had a patch flow by that I forgot to tag as a stable fix, and this tool caught it. It works really well. So yes, we have had machine learning working on the kernel for the past five years that way, giving you a much more stable kernel.

So that's ML for testing, and if you look at the work the fuzzers do, you can see the same pattern matching; I mean, all machine learning is pattern matching. That's syzbot's work: it's fuzzing the kernel like crazy, throwing a whole lot of machine processing power at it, trying fuzzing patterns and seeing how things break. We find all the bugs at one layer, it drops down a layer, we find the bugs at that layer, and it keeps on going. That's been going on for a long time. If you want to get involved in kernel development, and you've moved beyond fixing up coding style or changing the spelling of a word, so now you know how to develop and how the process works, look at the syzbot list of bugs. There are hundreds of reported bugs out there: here's a reproducer, run this, fix the problem. Syzbot does a great job of finding these issues; I wish we did a better job of providing the resources to fix them. That being said, we have the mentorship and internship projects to help provide resources for exactly this; Shuah is running a bunch of interns this round to help fix these problems, and I've done it in the past as well. She's doing a wonderful job with that, and I highly recommend it. If you want to get involved, please do. So yes, there's your AI and ML. Again, AI and ML is just statistics, nothing fancy, statistics at a large scale. That probably makes some people mad; that's okay. Cool.
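For context on what 'tagging a fix' means here: the kernel's documented convention, described in Documentation/process/submitting-patches.rst and Documentation/process/stable-kernel-rules.rst, is to add trailers to the commit message so the stable team's tooling can pick the patch up automatically. The commit hash, subject line, and version annotation below are invented purely for illustration:

    Fixes: 1a2b3c4d5e6f ("usb: gadget: example: fix null pointer dereference")
    Cc: stable@vger.kernel.org # 5.15.x

AUTOSEL exists precisely for patches that should have carried trailers like these but did not.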
Yes, thanks for linking that. And the fun thing is, we have seen security bugs reported to us, this happened just this week: oh, here's a security bug, and it's right there in the kernel. We dug in and, oh yeah, syzbot reported that a year ago. So people need to pay more attention to that stuff. And when syzbot has reported something publicly, it actually makes it easier for us to fix, because we can go look at the original bug report, we have a reproducer, and we can work on the problem in public, fix it, and push it out to everybody, even faster than if we had to do it in private. So we're really happy about that. We have time for a few more.

Bug bounty programs for the Linux kernel? The kernel community does not run one; we don't offer that. That being said, there are a number of companies that do. I will call out Google, which is known for paying a very large amount of money to people who find bugs. Google works with us: they want the problems fixed upstream first. So if you find a bug in the kernel and it's deemed a security problem, we fix it in the kernel, and you can then report it to Google and they'll pay you. It works out great. Famously, a number of months ago there was a bunch of USB gadget bugs that were fixed, and I think I just saw that the person who found them and helped fix them got paid an awful lot of money. So sometimes it's good; some people want that, and I'm glad it's offered. Google is also very good in that they do a lot of the triage for this type of stuff. So nobody has to hoard patches or hoard vulnerabilities, and you can still get paid, by companies like Google. There are a few other companies that also pay for bugs that then get reported to us, which is fine. But as a kernel community we don't do that; that's up to you. We don't have any money; we're not a corporation. If you want to get paid to fix bugs, work with Google; some people make good money, and then usually Google just hires you. I think they actually have a number of openings on their security team right now because of that.

Sometimes new features are accidentally backported to stable, where they change behavior, and we don't want them in stable; how can subsystem maintainers prevent that, short of monitoring the stable mailing list? One thing: if you don't want any patches for your subsystem backported to stable except the ones you explicitly mark for stable, let us know. There are a number of subsystems that have said, we are going to take care of this; we will be the ones sending you patches, we will be the ones tagging them for you; do not run them through AUTOSEL, do not run them through anybody else. And that's perfect; we'll work with you on that. I will call out XFS as a file system that is finally getting involved and doing this; they now send us patches for stable, they test them, and they run them through their framework. They're doing a good job. The KVM maintainer does a really, really good job of tagging the right patches for stable, and says anything that isn't tagged for stable probably shouldn't go in; sometimes we question a few, and they'll answer yes, no, yes, no, so we have a little manual process where we propose, hey, should these have gone in as well? The memory management team is the same: if it isn't tagged for stable, we just don't accept it unless it goes through those maintainers. So if you're the maintainer of a subsystem and you don't want anything backported except the stuff you send yourself, talk to me.
Talk to Sasha and me on the stable mailing list, and we'll be glad to mark your subsystem that way; we have tools that simply exclude those subsystems when we scan for patches. That being said, some complaints are about backports that change behavior, which is odd, because you just changed that behavior in Linus's tree as well; why wouldn't you want the same change in older kernels? But whether we shouldn't have taken all the patches, or something changed in a way we didn't want, great, we'll work with you. We all make mistakes, we all fix up bugs; we'll just revert it and move on. That happens very, very rarely. And you don't have to monitor the stable mailing list: we copy everybody involved in a patch when it goes to the stable team. If you were involved in the patch from the beginning, if your signed-off-by, reviewed-by, or tested-by is on it, you'll get copied again. So if you happen to see a patch go by and think, no, no, no, that shouldn't go in there, you were automatically notified. We notify you; I copy you. People complain I send too much email, but it's there: you are notified when patches are added, when patches are up for review, what branch they're added to, what release they're being merged into. You get lots of email about this type of stuff, so you know what's going on without watching the mailing list. So yeah: if you don't want your subsystem's patches picked up this way, talk to me and we'll work through it and figure out a way to do it.

Do we automatically do security testing with fuzzing tools? Syzbot does a lot of that testing. Again, a bug is a bug is a bug; whether it's a security issue or not is often not immediately obvious. People have done some really cool work showing that, say, a one-byte overflow turned out to be a security issue through some long, arduous chain, but we had simply fixed the bug and moved on anyway. That's our goal: fix the bug and move on. So yes, we run fuzzing tools; syzbot is a huge, wonderful fuzzing tool. There was another fuzzer we used to get reports from all the time, from Dave Jones, Trinity, I think, and that's still running; I still see reports from it, so Dave Jones is doing a really good job and is still around, which is cool. We also run Coccinelle and tons of other static analysis tools on the tree; even better, the paid ones run too, Coverity runs on the tree, and you can see all those bug reports. So we have lots of tools running to find security issues and plain bugs. Yes, we do a lot of that.
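As a purely hypothetical illustration of that 'one-byte overflow' point, in plain C and not from any real kernel code: a bug this small looks harmless in review, which is exactly why the policy is to fix every bug rather than first debating whether it is exploitable.

    #include <string.h>

    #define NAME_LEN 16

    struct record {
            char name[NAME_LEN];
            int  is_admin;          /* happens to sit right after the buffer */
    };

    void set_name(struct record *r, const char *src)
    {
            size_t n = strlen(src);

            if (n > NAME_LEN)       /* BUG: should be n >= NAME_LEN */
                    n = NAME_LEN;
            memcpy(r->name, src, n);
            r->name[n] = '\0';      /* when n == NAME_LEN, this writes one
                                       byte past 'name', silently clobbering
                                       the adjacent 'is_admin' field */
    }

Whether that stray byte matters depends entirely on what happens to sit next to the buffer, which is the 'long, arduous chain' part; upstream just fixes the comparison and moves on.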
Kernel books are old; do I have any plans to write any new ones? No. For the last driver book, the third edition, which I contributed to, O'Reilly said no, we don't want to do books anymore, and they won't give us the copyright back, so that's not going to happen. That being said, there are a number of good books. Look at the O'Reilly books on the kernel and on device drivers to get the ideas, and then look at the code to see how the implementation works today; they still hold up well. I know the driver book is still used in universities for teaching the basics of writing drivers, and the memory management book holds up as well. Robert Love has a good book too; for the basics of how the kernel works, it's really good.

But roll back even further and look at Tanenbaum's book on Minix, which is what I learned from, and what Linus learned from at about the same time; it was written at the university just up the street from here. Operating Systems: Design and Implementation is a really good book, and the ideas in it still pertain today. You'll see the old way of doing things, the POSIX model and the old UNIX style, in the kernel, but we've also moved way beyond that for a number of things; we do high-speed I/O and all these other fancy interfaces. So yes, look at the old books; they work really well. The networking books from Stevens are really good for networking basics. Then look at how the kernel handles things today, because we've evolved beyond those. Keeping a book up to date with all of that is probably not worth the time; you don't make any money writing a book, and it's still a lot of effort and work. That being said, people have kept the Linux Device Drivers book examples up to date; a professor, I think at the University of Florida, keeps them building on GitHub, so the examples do still work and follow along with the book. So no, no more plans for any books from me; I still have to read these Rust books and get through those. Cool.

Stress-ng is kind of interesting; it really just stresses your hardware more than anything else. Right, that's what I've found; you're right, it doesn't stress the kernel as much. They do have a few new features, some memory-management tests, that seem useful; some things are good, some are weird. I mean, if you talk to the file system people, they say, we have to run these tests for 10,000 hours to make sure they work, which seems very non-deterministic to me. File systems' interactions with tests are very odd, too. Absolutely; it's more like a soak test, continuous hours of operation, where you have to wait a long time to make sure it works. But yes, that's another thing.

And if you're looking for Rust resources, this webinar mentorship series already has three Rust sessions, and we are planning two more, so check those out. Wedson and Miguel, the Rust-for-Linux developers, came to us and volunteered their time to put on webinars; there are already three up, with two more coming. The cool thing is they also wrote a GPIO driver in C and the same GPIO driver in Rust, side by side, so you can lay them next to each other and see how each works; that's a great learning tool, and it helped me out a lot as somebody who knows C. I think they went through that in the device driver talk, right? Yeah, they designed the three-part series that way, to show you how to write a new driver and let you compare what's in the kernel today with how you'd do the same thing in Rust. So if you're interested in Rust, check those out; three are already up, and two more are coming later this year.

No more questions? Then we'll give fifteen minutes back to everybody. Thank you for having me; I'll hand it back to the Linux Foundation. Yes, thank you so much, Greg, and thank you, Shuah, for your time today. And thank you, everyone, for joining us.
As a reminder, this recording will be on the Linux Foundation's YouTube page later today and a copy of the presentation slides will be added to the Linux Foundation website. We hope you are able to join us for future mentorship sessions and have a wonderful day.