Alright, good morning. Thanks a lot. It's great to be back here in Edinburgh. Thanks, Jim. That was a very nice introduction, perhaps a little too nice, but so be it. Anyway, I've got a lot to talk about, so let's get right into it.

Here is what the kernel community has done over the last year or so. We've done seven major releases over the course of just over a year, since September of last year, with 4.19 actually appearing not on the 21st, but this morning. So I was off by one; just another off-by-one error. Anyway, if you look, every one of these releases reflects the work of about 1,700 developers, contains over 13,000 changes, and comes out on the order of every nine to ten weeks. It's an impressive rate of change and an impressive amount of work, but this is really the same chart I've been putting up for quite a while. It's pretty boring at this point. This is just what we do; it's kind of like putting up a chart of sunrise times at this point. So I'm not going to dwell on that.

But there is one little thing in this chart that's a bit different that I want to point out, which is that 4.15 took not 10 weeks, but 11 weeks to come out. This is the first time we have spent more than 10 weeks to produce a kernel since 3.1 came out almost exactly seven years ago. 3.1 was delayed because of the compromise of kernel.org and all the cleanup that was required as a result of that. So one might think that something similar had happened with 4.15, and of course the answer to that is our good friends Meltdown and Spectre.

Now, I'm not going to get into the technical details of all this. If you're interested in that, Greg Kroah-Hartman is giving a talk, I believe on Wednesday, where he will get into it in much more detail. But I want to talk a little bit about how this whole thing affected our community and how we dealt with it. Because from the point of view of the kernel community, a hardware vulnerability like Meltdown or Spectre looks an awful lot like a software vulnerability; we have to deal with it the same way. We gather together the people who understand the problem domain, give them the information they need, and let them fix the problem. We have a nice process for this that fixes, I've been told, on the order of one security vulnerability a week, and it works very well.

But this process was not followed with Meltdown and Spectre, with some results that we saw quite clearly and with some lessons that I think we all can learn going forward from here. Rather than following this process, what we got was a whole lot of secrecy. These vulnerabilities were found in the early summer of last year. The kernel community was not informed until the end of October, quite a bit later than that and only a couple of months before the disclosure, really. And when we were informed, there was a whole lot of siloing that went on. The people who were told about it were not allowed to talk to each other. They were not allowed to work together to solve the problem in the way that we normally work together in the kernel community, as a single engineering organization.

So there were consequences from this, including an awful lot of fragmentation out there. Every distributor shipped something different when these vulnerabilities were disclosed, and some of the things they shipped were rather different, or rather better, than some of the others. And I want to get into a little bit more detail, because it shows a little bit about how our community works here.
Look at how the Meltdown vulnerability was handled. The fixes for Meltdown, kernel page-table isolation, were actually developed in public. The patches were put out there at the beginning of November with a kind of unconvincing cover story about why they were there, and people were able to look at them. A whole lot of developers came in who were not actually privy to the full disclosure, but who could see the patches and help to improve them. These patches improved a whole lot, and by the time that Meltdown was disclosed, they were in good shape. They were in the mainline; they were more or less ready to go, modulo a bug here or there, but that's the normal way of things. And partly as a result of that, the distributors all shipped more or less the same thing in response to Meltdown.

The fixes for Spectre, instead, were developed very much in private by people who were not necessarily able to talk to each other about what they were doing. The result was that those fixes were not in very good shape at the time the problem was disclosed. Distributors shipped very different things, and none of them shipped anything that resembled what actually got into the mainline kernel in the end. Things had to change quite a bit once people's eyes were on it and we were able to apply our normal sorts of development processes to it. So there's a real difference there in how the two were handled, and it really shows, in my mind, what happens when you get the community focusing on a problem. And there are ways of focusing on problems like this that do not require broad disclosure. You can still do things under an embargo sort of situation, but it can be done better than it was done here.

We had other results beyond fragmentation: quite a bit of developer burnout and frustration. I think that people still don't understand just how hard a small group of people worked to make sure that we were ready for these vulnerabilities once they were disclosed. Those people include, by the way, our previous speaker, who put a lot of effort into it, and quite a few others. I think we owe them all a big round of applause for what they did. Those people had our back, and it was a good thing, but every now and then it kind of showed as they got frustrated with the bounds that were put on them and simply tired from all the work. And they all took some rather long vacations in January once this all settled out.

The final aspect of this that I think really needs to be mentioned is that a lot of people were left out in the cold because of the way this worked. Some distributors knew about these vulnerabilities. Others, like Debian for example, were only informed a little bit before the disclosure, and other distributors were not informed at all and had no story for their users when the disclosure happened. This also happened in the commercial world, in that there were some big cloud providers that had very nice, reassuring press releases about how all of their customers were already protected, but other providers, such as the one that hosts my sites, were not brought into this and had no story for their customers. And this is not a good thing, right? The free software world offers many things, one of which is equal access to the code. Everybody can work on it on the same basis and build something from it on the same basis.
If we now create a world where only the biggest players have access to information like this, this kind of crucial information, then what we're going to end up with is further consolidation beyond what we see now, and a much tighter commercial world. I don't think that's really what we want to work toward in the free software community, so I hope that we're not going to see more of that in the future.

The good news is that a lot of these lessons appear to have been learned. Everything I've been told says that the L1 terminal fault vulnerabilities, which were disclosed back in August, were handled much better. The developers were much happier with it and things went much more smoothly. So with any luck at all, we will not go through that again, and our community processes will be allowed to proceed.

But there is some bad news that goes along with what we have learned. We like to think of ourselves in the free software community as being in total control of our systems. We have all the code, we can look at it, we run exactly what we want to run. But that code has to run on hardware, and that hardware, as it turns out, is kind of a black box; we don't really know what's inside it. Now, if you've ever read the works of Herbert Simon, one of the things he pointed out is that the way you learn about what's inside a black box is to make it fail; that will teach you what's inside it. And so we've made it fail, and what we found inside is not necessarily what we might have wanted to find. What's inside that black box is, among other things, some proprietary software that suffers from a lot of the problems of proprietary software. We find ourselves in a world where the hardware is not quite the solid foundation that we thought it was going to be. We haven't fully come to terms with that, or with what I believe is going to be an ongoing series of vulnerabilities arising simply from the nature of the hardware as a whole. This is going to be a hard one to fix; we're going to be working on it for a long time.

So, moving on: stable kernels. I put up that chart of mainline kernel releases at the beginning, but very few of us actually run those. We mostly run something that's been produced by the stable kernel process, which includes a lot of fixes and so on. These are the currently maintained stable kernels out there. Plus, there are some others, maintained in particular by people in the Debian project, that go back even further. But in the mainstream stable kernels, we have support for kernels going back about four years. An awful lot of stuff goes in there; some of these kernels have received about 10,000 fixes since the actual mainline release. That's an awful lot of fixes, and it's something you want to have if you're running these kernels.

This process works pretty well, and it's getting better, but we are of course working on making it work better still. One of the areas of effort here is longer-term support, because the two years that we originally set aside for long-term stable kernel support really proved not to be enough for a lot of users. So we now have Greg promising to support 4.4 through 2022 and 4.9 through 2023, which is quite an extension of that support. That's an awfully nice bit of support for the community. And then there are really crazy people, like the Civil Infrastructure Platform, looking to support kernels for 10 years or longer.
They're looking at deployments in buildings, or in cars, or in places where the system has to run for a very long time, and you want it supported for all of that time. You have to figure out a way to actually keep things running for that long; not an easy task.

The other thing, of course, is that we always want to get more fixes, because fixes are good. The 4.9 stable series has received almost 10,000 fixes, but the mainline since 4.9 has had over 136,000 changes merged into it. It's pretty likely that some of the other 126,000 changes were fixes that we wanted to have in the stable kernels too. Identifying all of those is hard; we're working on ways to improve that and have improved it quite a bit. There are even people like Sasha Levin working on a neural network, a machine-learning system, to automatically identify patches that look like fixes and kick them out for review so they can be brought into the stable kernels. So we'll see more and more fixes going into the stable kernels over time. It doesn't mean that we're releasing buggier kernels; it means we're getting better at finding the fixes.

This is all good, but there are some challenges in the stable world as well. One of them, which we'll be talking about in the maintainer summit this afternoon, is regressions. If you have something that's supposed to be a stable kernel, the last thing you want to do is break it with a bad fix. But that does happen on occasion, and a couple of times it's been fairly serious. This has gotten to the point where some of the distributors are getting a little leery about using the stable kernels as a result, and that is not good; that is not something we want. So, you know, there's a lot of effort going into testing and so on, but this is a challenge that we're going to have to face. We've been talking about it for years, and I think we will continue to have to talk about it for a while.

Another problem for stable kernels, to get back to a previous subject, is huge, invasive fixes like Meltdown and Spectre. It was quite a challenge to get those fixes into the mainline kernel. Backporting them to, say, 4.14 was more work, because you had to take into account all the changes that happened in the meantime. By the time you get back to 4.9 or 4.4, it's a huge job and a very invasive change to those kernels, one that really pushes the boundary of what you might consider to be a stable kernel at that point. And it got to the point where, for the older kernels, not all of those fixes were ever fully backported, because it is simply too hard to do. Fixes like this, of which there will be more, have caused people to start to question this long-term stable model entirely and to ask whether the model is broken: whether the idea that we can support a kernel for many, many years, put all those changes in there, and keep nothing constant except the version number is really something we can sustain. In fact, some developers have been quite clear that they believe otherwise on this score.

So we cast around for other models. What we usually hear suggested, and what developers have been suggesting for quite a while, is to stick with the latest long-term stable kernel. If you've got an older system, bring it forward to this newer kernel, even if you don't want to rev the version number, because this is the kernel that has all of the fixes and perhaps some useful features, some security hardening, and other stuff that you would like to have.
It's the best kernel that we in the community know how to make at this given time, and it is the kernel that we tell people they should be running. This is a hard pill to swallow in a lot of places. People are averse to risk and afraid of regressions, and we all understand that. But this, I think, is where we're going to end up eventually: doing this rather than trying to support ancient kernels for a long, long time. Even if we get there, though, that leaves behind one little problem: what do you do if you're going to take some really old hardware and put a modern kernel on it, and make sure that all of the hardware associated with that system still works, even though the developers working on the modern kernels haven't had that hardware in hand for a decade? It's hard not to break things in that setting, and we haven't really solved that problem yet. That's something we're going to have to work on for a while yet. So, some challenges in that area.

I want to talk about one technical software thing, because I think this is an area where people don't quite realize how fundamental the changes are that are going on. And that, of course, is BPF. BPF is an in-kernel virtual machine. It's one of many we have in the kernel, but this one is different: it allows a user-space process to load a blob of code into the kernel and to run that code in kernel space. Okay, this sounds like kind of a dangerous thing to do. So, to address some of that, there are things like a built-in verifier that performs a static analysis of this blob of code when you load it into the kernel and tries to ensure that the code is safe to run in the kernel's context. There is also an in-kernel just-in-time compiler to convert this code into native code. So even though it's code written for a virtual machine, it runs as native code, and it runs quite quickly. That's important.

So BPF is showing up in an awful lot of places. You see it making security policy decisions: the secure-computing (seccomp) mechanism has allowed the use of BPF programs for policy decisions for a long time (a small sketch of what such a filter looks like appears below), and the out-of-tree Landlock security module will expand that quite a bit once it gets into the kernel as well. Protocol implementations: there are a thousand infrared remote controls, like TV remotes, out there, and each one, of course, has to speak its own protocol, just because that's how these things are done. So rather than encode all of those protocols in the kernel, you can just load a little BPF program that understands the one protocol you need. Instrumentation: kernel tracing has made heavy use of BPF for a while, and there's an awful lot of interesting work going on there; if you haven't looked into it, I recommend doing so. We see a lot in networking, including packet filtering, which is unsurprising; BPF was, after all, the Berkeley packet filter. But developments like bpfilter go beyond that and remove most of the kernel's firewalling code, replacing it with a simple mechanism built on BPF, where the particular firewalling rules you need are loaded as a BPF program. Rather than having a whole general-purpose firewalling mechanism, you have a very tight little program that does exactly what you need and no more. It gives you more flexibility, and at the same time it has the potential to be quite a bit faster than what we have now.
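Here, then, is that small sketch of a seccomp policy, as a minimal, hypothetical example rather than anything taken from the talk: a classic-BPF filter, built in user space and installed with prctl(), that allows every system call except getpid(), which is made to fail with EPERM.

    /* Minimal, hypothetical seccomp example: allow everything except
     * getpid(). A real filter would also check seccomp_data.arch before
     * trusting the system-call number. */
    #include <stdio.h>
    #include <stddef.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        struct sock_filter filter[] = {
            /* Load the system-call number from the seccomp data area. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* If it is getpid(), fall through to the "deny" rule below;
             * otherwise skip ahead to the "allow" rule. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
            BPF_STMT(BPF_RET | BPF_K,
                     SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        /* Required in order to install a filter without privileges. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
            perror("no_new_privs");
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
            perror("seccomp");

        /* From here on, getpid() is refused in this process. */
        long pid = syscall(SYS_getpid);
        printf("getpid() returned %ld (errno %d)\n", pid, errno);
        return 0;
    }

In practice such filters are usually generated with helper libraries such as libseccomp rather than written instruction by instruction, but the shape is the same: a small program, verified and run by the kernel, encoding a policy supplied from user space.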
The other thing that's of interest here, in the networking world, is called the Express Data Path, or XDP. XDP is an attempt to claw back some of the users who have moved to user-space networking stacks over the years by providing some of the same functionality. If you're running the Express Data Path, there is really almost no network protocol processing happening in the kernel at all. Instead, the kernel is sorting packets into a set of memory buffers that are shared with user space, and then the protocol processing, whatever you need to do, is done by a user-space program that is able to read those packets directly out of the shared memory buffers without having to go into the kernel at all. So the processing has moved out of the kernel, but the sorting of packets into those buffers is done by, of course, a BPF program (sketched below); it's a vital piece of the whole picture. We're going to see interesting stuff happening around that, and there's more going on with BPF.

In short, we're seeing BPF used, one, to supplement existing kernel functionality, adding the ability to make policy decisions or whatever, but also to outright replace some kernel functionality: to allow a replacement for that functionality to be loaded from user space and to do things in a different way than the kernel developers ever imagined it might be done. This allows us to push code into the kernel for both of those purposes.

That's a significant development, but there's another piece to this, and that piece is a move to push code out of the kernel and into user space. Again, the Express Data Path I just mentioned moves a lot of network processing out of the kernel and into user space. Secure computing can, or will be able to, push policy decisions out to a user-space program. The ELF modules mechanism, which was recently merged, allows a kernel subsystem to run user-space code as a special little module that's contained within the kernel but runs in user context, for isolation and other such things. And userfaultfd is a system call we've had for a while that allows the handling of page faults, which is a consummate kernel task, to be done in user space. So we're also seeing an effort to move code in the other direction.

This is changing the way we view our system. The traditional view of any kind of monolithic Unix-type system is of a hard kernel in the middle, a kernel with a very firm boundary around it and well-defined interfaces across it. With all this effort to push code into the kernel, and the associated effort to move other code out of it, we're seeing that boundary become porous and the shape of the kernel become much more amorphous. It can change depending on what workload is running at a particular time; it can be configured quite differently. So the way our systems are going to look as this work progresses is going to change, and I think it's going to be quite interesting to see where it takes us. Some people say that we're finally getting toward a microkernel architecture with all this; others perhaps prefer to use different words. But we're definitely changing some of the fundamental concepts of how the kernel and user space interact and how we build an actual running system for a specific task. It's going to be interesting to watch.
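As for that packet-sorting sketch: what follows is a minimal, hypothetical illustration of the kernel-side piece of this arrangement, roughly along the lines of the kernel's own AF_XDP sample code (the names xsks_map and xdp_sock_prog are made up here, and the user-space side that sets up the shared buffers and the AF_XDP socket is not shown).

    /* Hypothetical sketch: an XDP program that steers packets arriving on
     * a receive queue into whatever AF_XDP socket user space has installed
     * in the map slot for that queue. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_XSKMAP);    /* map of AF_XDP sockets */
        __uint(max_entries, 64);              /* one slot per receive queue */
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
    } xsks_map SEC(".maps");

    SEC("xdp")
    int xdp_sock_prog(struct xdp_md *ctx)
    {
        /* Redirect to the socket bound to this queue; if no socket has
         * been placed there, the redirect fails and the packet is dropped. */
        return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, 0);
    }

    char _license[] SEC("license") = "GPL";

A program like this would typically be compiled with clang targeting BPF and attached to a network interface (for example with ip link set ... xdp obj ...), after which a user-space process bound to the same queue reads the frames directly out of the shared memory, with the kernel doing nothing more than the sorting shown above.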
So the last thing I have on my agenda here is something that I didn't think I could really pass over, given all that has happened, and that is the concept of conduct. So, I wrote a book once. I've learned from my mistakes; I'm not going to do that again. But when O'Reilly put together a series of Linux books and decided on the covers, they didn't give us little leopard kittens or baby seals or any sort of cute animal like that, right? We got a Wild West theme. I've never talked to whoever it was that made that decision, but I can only assume it was a very deliberate choice, because the kernel community of that time was viewed, I think rightly so, as a sort of Wild West: a place where the rules, if they applied at all, were rather different rules, and where we sort of settled things with duels in the street and so on.

So think back to the environment that we had in the good old days when that book was written. We had no source-code management system and no change tracking; if you go back and look at the pre-2.4-ish kernels, it's hard to tell where the code came from, who contributed it, or how it got there. We had no release discipline: if you remember the days when it took us three or four years to get a major kernel release out, rather than nine or ten weeks, you realize that things have changed a little bit over this time. We didn't have our strict anti-regression rule. We didn't have much of anything in the way of automated testing; in fact, it was explicitly said at times by kernel developers that the reason we keep users around is to test our kernels. So what we had was, in very many ways, not a professional software development environment, and it's amazing that we accomplished as much as we did with those kinds of handicaps. It was a different sort of environment.

And as part of that, we had no code of conduct, no set of rules describing how we thought we should deal with each other in the kernel community. In fact, we had, if anything, a rule that said people should be able to say just about anything they want and behave pretty much any way they want. And there were people who took that to heart.

Over the years, we have addressed most of these things. We have source-code management and change tracking and all this good stuff. The kernel development community has, over the time I've been watching it, which is, gosh, about 25 years now, moved from the Wild West into a highly professional, highly disciplined development community. It has changed an awful lot, and I think we would all pretty much agree that these changes have been for the better, that we are doing much better now. It's the only reason we can now produce a kernel, with such a rate of change, every nine or ten weeks and actually produce something that people want to run.

So, as of about last month, we now have a code of conduct too. We have finally adopted a set of rules that say, essentially, that we are going to deal with each other with respect, that we're going to try to be nicer to each other. These rules have created a certain amount of angst in some areas of our community. The worst of it, in my opinion, comes from outside the kernel community; that is not something I'm worried about so much. But there are definitely developers within the community who are worried about some of this.
They're afraid that we are maybe going to have to start accepting code that is not up to our standards, or that we have given control over the community to people outside of it who perhaps do not share our goals, or that kernel development is no longer going to be fun. I think what we're going to find over time, as all of this settles out, is that these fears are unfounded: that the kernel community remains in control, that we continue to hold to our goal of producing the best kernel that we can, and in fact the best kernel that anybody can produce, and that, in the end, we will continue to have fun. In fact, if we can manage to be a little bit nicer to each other and a little bit more respectful of each other as we develop the kernel, I think it will be even more fun than it has been in the past. That is my hope, that is our goal, and I think that is where we're going to get. And with that, I am out of time, and I would like to thank you all very much. Thank you. Have a good day.