All right, well, good morning, everybody. Good to be here. There we go. This worked. It worked before, so it should certainly work this time as well. So my name is John. I'm here to talk about the kernel. I've got a whole lot of stuff to cover in a short period of time, so I'm going to go fast. So let's just get right into it. I'll start with a picture that I put up in a lot of talks. This is a simple table of the major kernel releases that we have done over the last 13 months or so. It's pretty boring; it doesn't change a whole lot. You see that we've done six releases over this period of time. Each one contains something on the order of 13,000 or so changes, quite a few changes for a 10-week development cycle, containing the work of 1,700 or so developers. We can extend this forward pretty easily to the 4.16 release, which will probably come out right at the beginning of April. Maybe the worst April Fool's joke ever; we'll never know. But we can predict this pretty readily based on what's happening out there. Like I said, it's boring. I'm not really sure why I put these up anymore; they always look like this, except that the numbers slowly get bigger over the years as the kernel community picks up. It continues to run as a pretty well-functioning machine. There is one thing that's a little bit different on this one, though, that stands out, which is that 4.15 took 77 days, 11 weeks, to come out. It's the first time we have taken that long to produce a kernel since 3.1, several years ago. And 3.1 was delayed because kernel.org was compromised, and we were trying to recover from that. So one might imagine it took something fairly significant to delay the kernel release this time around. And of course, we all know what that was, which is our good friends Meltdown and Spectre. These were disclosed to the kernel development community around the beginning of the 4.15 cycle.
By the time 4.15 came out, we had the Meltdown mitigations merged, and the Spectre mitigations were pretty much on their way toward coming in at that time. I'm not going to get into the details of Meltdown and Spectre; there's lots of information out there. If you're really curious, I can recommend a website that has covered them in a fair amount of detail. But I do want to look a little bit at what Meltdown and Spectre, and the response to them, can teach us, because there is a certain amount of unhappiness in the community about how this stuff was handled and how it played out. And so it's worth looking at in the hopes that the next time around, we can do a little bit better. But the first thing that I would actually like to point out that we learned from all this is that the development community really has our back. Over the course of two months, we saw many, many kernel and beyond developers working around the clock, doing without sleep, doing everything they could to protect us against these problems that were not in any way of their own making. They got a little grumpy toward the end of it, because they were having a pretty hard time of it. But if you look at the people who came in, some of them were obviously doing it as part of their job. Others came in just because there was a problem to solve, and it needed to get solved, and they contributed to it. And it was an amazing effort. And I think that we owe the people who worked on the Meltdown and Spectre mitigations a pretty big round of applause, honestly. So they did a whole lot of really good work. But it has to be said that there were people who were left out in the cold anyway. We can start with, for example, the BSD communities. There are millions of BSD users out there. Depending on which version of BSD you're using, they either got no notification at all prior to the collapse of the embargo, or their notification was measured on the order of one or two weeks.
So when the embargo fell down and these vulnerabilities were disclosed, they really had nothing to tell their users other than: stand by, we're working on this, we'll get there as soon as we can. And in fact, they're still scrambling to try to resolve these things. That's not a good thing to do to the BSD community. I would like it if we could do better than that in the future. But even in the Linux world, there were some interesting things that came out of this. The major cloud providers all put out nice releases saying we've protected all of our users from the very beginning, everybody is happy. But not everybody uses the major cloud providers. Some of us are using what are called tier-two providers. If I go onto one of the LWN cloud servers and go into this nifty directory that you should know about, where you can see what vulnerabilities your particular CPU has and what the responses are, I look and I see: OK, I'm nicely protected against Meltdown, right? It has kernel page table isolation installed. This happened about a month after the disclosure on this particular provider, which is pretty much the case for all the tier-two providers, and it is still completely vulnerable to Spectre. None of the Spectre mitigations have yet gotten into their kernels. That is because the tier-two providers were not notified. They were blindsided by it. They learned about it when the rest of us did, when the embargo fell down. This, I think, is kind of worrisome. We have a market that is already pretty well concentrated among a small number of tier-one providers. This kind of information sharing is only going to serve to further consolidate that market. And I don't think that's really a good thing for anybody that is involved. I think we need to find a way to get this information out a little bit more widely than we have in the past. And there are a couple of things that we can do about that.
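The directory being referred to here is, I believe, /sys/devices/system/cpu/vulnerabilities, which kernels carrying the Meltdown/Spectre reporting patches (4.15 and various stable backports) expose. A minimal sketch of reading it; the helper name is my own, and it simply returns nothing on systems whose kernel doesn't provide the directory:

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def cpu_vulnerabilities(path=VULN_DIR):
    """Map each vulnerability name to the kernel's one-line status string.

    Returns an empty dict when the directory does not exist (older
    kernels, or non-Linux systems).
    """
    status = {}
    if not os.path.isdir(path):
        return status
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            status[name] = f.read().strip()
    return status

# On a patched kernel this prints lines like:
#   meltdown    Mitigation: PTI
#   spectre_v2  Vulnerable
for vuln, state in cpu_vulnerabilities().items():
    print("%-24s %s" % (vuln, state))
```

The file contents ("Not affected", "Vulnerable", "Mitigation: ...") are exactly what the talk is describing: a patched provider shows a PTI mitigation for Meltdown while still reporting "Vulnerable" for the Spectre entries.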
But another thing I want to point out here, that we learned very well, is that the kernel community works as a single engineering organization. We have people working for hundreds of companies working on the kernel. But when they work on the kernel, they're working as kernel developers. And you can see this, perhaps most clearly, in the different responses to Meltdown and to Spectre. The kernel page table isolation patches were posted in mid-November, the first time around. They then saw a broad influx of participation from the community; people from all over worked on these patches, improved them, made them much less invasive, made their coverage much better, made them more secure. What was merged into the 4.15 kernel, a little before the embargo, didn't much resemble what was first posted. It was much better than that, because of the kernel community development process that was applied to it. So in fact, when the embargo fell down, we already had that stuff merged into the mainline kernel. The Spectre fixes, instead, were worked on in private. And they were not only worked on in private, but the people who were working on them often were not even able to talk to each other about what they were doing. As a result, when the embargo ended, we had nothing that was set to go into the mainline at that point. We had distributors who had been notified shipping something. We had very different responses from one distributor to the next. And if you look at some of the publicly posted benchmarks, you see that the performance characteristics are very different; the protection characteristics are very different. What did eventually go into the mainline kernel didn't much resemble the patches that were floating around right after the embargo fell, because again, once the community got a look at them, it didn't like them very much and made them a whole lot better.
Had there been some sort of a way for the community to work on these, or at least a subset of the community, because you couldn't really do this in the open. But if there had been a way to bring in a proper subset of the community to work on these, we would have had a much better result and a better story when the embargo collapsed. Something that would have helped with that would have been to realize that the kernel development community actually has some pretty well-evolved responses to security issues. We have a mailing list of trusted people who can look at a problem, bring in anybody else who needs to be brought in, get a fix developed, get it out to the distributors, get it into the stable kernels. I've been told that this process handles about one security issue a week. Most of these we never even know about; they just get fixed, and they go into the kernel, and life goes on. Not all of us actually agree with the nobody-knows-about-it part, but that is how it works. And the important thing is that we're getting the fixes out there. These processes were not followed for Meltdown and Spectre. And this was perhaps because it was a hardware issue, not a software issue. But we had to respond to it as if it were just another software issue: it was something that we had to fix in software, and we had to develop patches in pretty much the same way. Had we been able to actually apply our community responses to these problems, I think we would have had better solutions faster, and perhaps better-distributed solutions as well. So I really hope that the next time this sort of thing comes around, because I think we all agree that there will be a next time, we can follow our procedures a little bit better and not try to kind of wing it and do things on the fly as was done this time around. The last thing I'll say on this: we think of ourselves as running all this nice free software on open systems and so on.
But what we see with vulnerabilities like this is that, to a great extent, our computers are still proprietary black boxes, with a lot of proprietary software running underneath what we think of as being the hardware level. We saw this, for example, with the Intel Management Engine issues that came out last year, which were quite surprising to a lot of people. We've seen it again with Meltdown and Spectre, where there's a bunch of proprietary software doing lots of black magic behind the scenes. I really would like to see a future where we move a little bit more towards open hardware. Something like some of the open firmware stuff that Imad was talking about before; I think that's the kind of stuff that we'd like to see. But also open hardware design, where we can actually review these designs in public and affect them the way we do with software. It's not a panacea; it's not going to fix every problem outright, just like it hasn't with software. But I think it can only improve the situation quite a bit. All right, enough on that. But while I'm on the topic of security: I'm often a little bit pessimistic about how our community deals with security, so I thought I'd put up some good news for a change, just to be different. Keep you on your toes. Because there are some good things happening. Perhaps the most significant is that work on hardening the kernel is going strong and accelerating. Hardening means putting protections into the kernel so that even in the presence of known vulnerabilities, the kernel itself is difficult or impossible to exploit. This kind of work was very difficult to sell to the kernel community for many years. Developers didn't see the need for it; they didn't like the sort of overhead that came with it. But the case for hardening, I think, has now been quite effectively made. And so that work is happening, and it's picking up, and we are all safer as a result of it.
We're seeing a lot more testing and fuzzing than we used to, finding problems before they hit our users and getting them fixed. And more fixes are getting into the stable trees. And this is important. In the kernel community, we fix incredible numbers of bugs, thousands of bugs, a really embarrassing number of bugs, actually. A lot of these bugs have security implications, but those are often not apparent at the time that the bug is fixed. It just looks like a bug, and it takes some enterprising attacker to come along and figure out a way to exploit it and take advantage of our systems. So the best way to be safe is to get as many of these fixes as you can, because many of them are fixing security problems that we don't actually know about yet. And that means getting those fixes into the stable trees. There's been a real effort to get more of them there. There's even a guy out there now, Sasha Levin, who has developed a neural network system to identify fixes that look like they should be stable fixes, call them out, and get them headed towards the stable trees. So if you've been looking at the stable trees, you're seeing a whole lot more of these stable releases coming out with a lot more fixes in them, and this is part of why that is happening. And I think that, again, that's a good thing in terms of keeping ourselves and our users safe. On the other hand, we still don't have anybody who you could call the chief security officer for the kernel. Nobody whose job it is to keep the kernel secure: make it secure, keep it that way. This is an area of chronic underinvestment, I think, in our commercial ecosystem. Everybody seems to think it's somebody else's problem. I wish that we could get a little bit more funding in that area to improve our story in that regard. And of course, vulnerabilities abound. We're still very good at adding vulnerabilities. When you're adding 13,000 changes every 10 weeks, some of them are gonna have problems in them.
So we have a long way to go, but I think we're heading in the right direction, and that is a good thing. One area where I think we do have a long way to go that's relevant to this community: how many of you out there have some sort of an embedded system that will never be updated to address the Meltdown and Spectre problems, or many other ones as well? Those of you who have not raised your hands, I think, haven't looked closely enough yet. So there are a lot of reasons why this happens. One of them certainly is the simple problem of the huge amount of out-of-tree code that is shipped on these systems. If you've added a bunch of stuff to your kernel and you've backported it to whatever kernel you shipped, the idea of moving to a new kernel and dragging all that code forward is intimidating at the very least. It serves as a sort of ball and chain that prevents these sorts of updates from happening. And we see it come up every now and then. There was actually a fascinating discussion that just went on about a backport of the Meltdown patches to the 4.9 kernel for the ARM architecture. And this led to kind of an exploding rant from Greg Kroah-Hartman that's actually worth reading. He was questioning the need to do this big backport at all, saying: just move to 4.14 instead. And then, when you realize you can't do that, go and yell at your SoC vendor for the nightmare that they conned you into with their three-plus million lines of code added to their kernel tree. You're always living on borrowed time, he said, and it looks like that time is finally up. These are pretty strong words, but there really is some truth to this: we have so many systems that are not getting so many important fixes because they are held back by all these lines of code. And should you think that three million lines is a bit of an exaggeration, this is a slide I ripped off from Tim some years ago. I should maybe get a new one from you.
It just shows the amount of out-of-tree code being shipped with popular mobile chipsets. And you see that we really are talking millions of lines of code. That's a huge anchor holding this sort of stuff back. So we need to do better. There are actually some reasons for optimism in this area, including the work that Google's done to bring Android up to at least a set of well-supported, long-term stable kernels. So that has helped quite a bit. But more encouraging, perhaps, is the fact that a number of SoC vendors are actually starting to see the light and starting to realize that by carrying all this out-of-tree code, they have not only made life harder and more insecure for their users, but they have made life more difficult for themselves, because it really is a pain to deal with all this. So a number of them have actually committed, at least in private, to working more closely with the mainline and with upstream and reducing that delta in the kernels that they ship. This is gonna take years to play out, but I think the end result is gonna be a whole lot better for everybody involved. So just to finish this out: if you're working in the embedded area, and I think there might be a couple of people in this room doing that, work upstream. Don't drag all this out-of-tree code with you, and use mainline or at least long-term stable kernels to the greatest extent that you can, and we'll all be better off for it. Meanwhile, there's a whole lot of new technology going into the kernel, of course, with all of these changes. I don't have time to even mention it all, much less talk about it, but there's one thing I wanted to talk about because I think it's gonna affect a lot of us over time. And that's a thing called BPF, or the Berkeley Packet Filter, because there's a lot happening here. BPF is, at the lowest level, a simple virtual machine that runs in the kernel. It's got a simple processor model: registers, various operations that it can perform.
The idea being that you write a program in this particular assembly language that's understood by this virtual machine, you load it into the kernel, and it actually gets run in kernel space by the kernel. This may seem like a radical idea, but it's really just one of many in the kernel now. We have things like the ACPI interpreter; the classic BPF engine, which is still there, still used for a couple of things, but will go away eventually; and the various firewalling mechanisms in the networking stack, I believe there are five of them at the moment, which are all virtual machines as well, in various types and forms. So a virtual machine is nothing special or new in the kernel; they abound. But there are some things that are different about the current extended BPF work that is going on. It has been designed from the outset to be easy to just-in-time compile into native code, so you upload this program, it turns into native code, and it runs very quickly. There's an extensive verifier built into it to ensure that any program given to the kernel is safe for the kernel to run: it makes sure it doesn't access memory it shouldn't access, doesn't leak memory to user space, doesn't go into infinite loops, lots of things like that. It has the map data structure for communication between the kernel and user space, and the ability to call functions in the kernel itself to obtain kernel functionality. And there is extensive support in the toolchain: you can write programs in C and compile them to BPF with the LLVM compiler; you don't actually have to write in the BPF virtual machine language. Then there's a whole Python-based development structure around that for writing programs that have a BPF component. It makes it all really pretty easy. And as a result, BPF is showing up in a lot of places. The express data path, or XDP, is a set of BPF hooks in the networking stack that are designed to allow packet processing decisions to be made very quickly, under user customization, by the uploading of a BPF program.
So this is a response to the performance concerns that have driven a lot of users to user-space networking stacks and things like that. And it seems to be working, bringing some of those people back to using the actual in-kernel networking stack, which is a good thing. Bpfilter is yet another firewalling module built around BPF. It looks like it has a good chance of eventually taking over pretty much all the firewalling duties within the kernel. Secure computing (seccomp) uses classic BPF now; work is being done to update that to the extended BPF engine. And there's a huge amount of work that has been done around tracing, using BPF programs for selection of tracing events, data aggregation, things like that. The result of this, if you go and look up the BPF compiler collection and the tools associated with it, is that we now have hundreds of tools for looking inside the kernel, understanding what's going on deep within a running, production-level kernel, that we never had before. We have a level of introspection that we didn't have. If you haven't looked at those tools, you should; there's a lot of interesting stuff happening there. And BPF, I think, is gonna show up in a lot of other places as well. We may be heading towards a situation where, when new functionality is added to the kernel, especially if it has highly performance-critical characteristics or needs certain kinds of customization, this functionality may be provided as something that you access by way of a BPF program, not by way of a standard system call. And so you end up having to use these BPF components to get it to work the way you want. We come from a standard UNIX mindset where C is our system programming language. This is a little bit hyperbolic, but to do a lot of interesting things with the kernel in the future, we may end up using BPF components for pieces of that.
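To make the load-verify-run model described above concrete, here is a toy, from-scratch sketch: a tiny register machine, a "verifier" pass that rejects out-of-bounds and backward jumps (one simple way to guarantee termination, in the spirit of the real verifier's loop checks), and a plain dict standing in for a BPF map shared with user space. The instruction set, register count, and limits here are entirely invented for illustration; the real eBPF ISA and verifier are far richer.

```python
REGS = 4          # hypothetical register count (real eBPF has 11)
MAX_INSNS = 64    # hypothetical program-size limit

def verify(prog):
    """Reject programs that could loop forever or jump out of bounds."""
    if len(prog) > MAX_INSNS:
        raise ValueError("program too large")
    for pc, (op, *args) in enumerate(prog):
        if op == "jmp" and args[0] <= pc:
            # Forward jumps only: this guarantees termination, much as
            # the in-kernel verifier rejects unbounded loops.
            raise ValueError("backward jump at %d" % pc)
        if op == "jmp" and args[0] > len(prog):
            raise ValueError("jump out of bounds")

def run(prog, bpf_map):
    """Interpret a verified program; bpf_map stands in for a BPF map
    used to communicate results back to user space."""
    verify(prog)
    regs = [0] * REGS
    pc = 0
    while pc < len(prog):
        op, *args = prog[pc]
        if op == "mov":          # mov rX, imm
            regs[args[0]] = args[1]
        elif op == "add":        # add rX, rY
            regs[args[0]] += regs[args[1]]
        elif op == "store":      # store map[key] = rX
            bpf_map[args[0]] = regs[args[1]]
        elif op == "jmp":        # forward jump only
            pc = args[0]
            continue
        elif op == "exit":       # return value in r0
            return regs[0]
        pc += 1
    return regs[0]

# A "packet counter": bump a map slot, return 0 (accept the packet).
prog = [
    ("mov", 1, 1),               # r1 = 1
    ("mov", 0, 0),               # r0 = 0 (return value)
    ("store", "rx_packets", 1),  # map["rx_packets"] = r1
    ("exit",),
]
counters = {}
run(prog, counters)
print(counters["rx_packets"])    # 1
```

The real kernel adds the pieces this sketch leaves out: JIT compilation of the bytecode to native instructions, type and bounds tracking in the verifier, and helper-function calls into the kernel proper.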
I think this is, I think it's a good thing; it's gonna give us a level of performance and flexibility that we haven't had until now. But it is gonna be a change. It's an interesting thing to watch. So we add new stuff to the kernel all the time. The kernel grows and grows and grows. We're not so good at taking stuff out of it, for various reasons, but as Arnd Bergmann just said, the main reason simply is that the stuff that nobody cares about, nobody cares about, and so it's hard for us to identify. But Arnd does care about such things, and he decided to look and see if there were maybe a couple of architectures that we could get rid of because they weren't being used anymore. He found eight of them. So the current plan is to remove eight architectures from the kernel in 4.17. That's quite a few; that's something like a quarter of the number of supported architectures that we have. All of these are pretty much single-vendor SoC designs whose vendors decided that it was cheaper just to go with the more common commercial designs instead. I looked at the patches this morning; this is the better part of 500,000 lines of code that will go out of the kernel in 4.17. That's a good thing. The history of the kernel is such that only two releases have ever been smaller than their predecessors in terms of lines of code, but I believe 4.17 might be the third one. Although, if we have someone come along with another 100,000 lines of GPU register definitions or something, we might yet blow that; but it could happen. While we're on the topic of old stuff: if you look at the documentation, it says that if you're building the kernel, you need a version of GCC that's at least 3.2. The documentation is lying to you. The docs maintainer has been sleeping on the job again. The earliest known compiler that anybody has made work on current kernels is GCC 4.1, and you have to work pretty hard to do that.
I think that we're gonna see, in the very near future, it being documented that if you want to build the kernel, you need at least GCC 4.6, and maybe something a little bit newer than that, to do the job. So if you're building the kernel with ancient compilers, you're gonna wanna look at moving forward on that. We're finally gonna update that. And I wanna just close out, on the topic of disruptive systems, with something that I see happening here. Linux was once a classic disruptive system, as the term is defined: this tiny little toy system that didn't do a whole lot. It wasn't taken seriously by a whole lot of people, but then it grew and eventually displaced a whole bunch of big, well-established legacy players in the market. And it's now dominant, as we heard before, over much of the industry. Linux is kind of that legacy system now. We've been around for a long time. It's an old and big project. It has a lot of momentum and a lot of inertia. It has large resource requirements that make it somewhat unsuitable for a lot of the smaller deployments that people want to do. It has a large and obstreperous development community that can often make it hard to get changes into the kernel, can make it hard to make the kernel do what you want it to do. You look at the rate of change of the kernel and you say, okay, we move very fast. But I think the best way to describe it is that our bandwidth is indeed very high, but our latency can also be very high. It can take a long time to get stuff into the kernel. And this bothers a lot of people in the industry, especially in this part of the industry, which often needs to move faster than that. And not everybody likes the GPL license, of course, that is associated with the kernel. I think that's short-sighted, but people don't ask me about that.
So if you look at the program for this conference, you'll see a lot of talks about a system called Zephyr, for example, which is a small, lightweight, permissively licensed system that people are looking at using instead of Linux in a lot of situations. Equally notable in its absence from this particular conference is a system called Fuchsia, which is a small, permissively licensed kernel being developed at Google that has been mooted as a replacement for Linux on Android systems, which would be a pretty significant change for our community if it were to happen. There are a lot of other things going on as well; we hear a lot about these other systems out there. And, you know, so be it: this is free software, and this is all good stuff that is happening in a very real way. But I would like to make the point that we have all benefited hugely by having an operating system and a kernel that is, one, not dominated by any one company, and, two, under a copyleft license. That has helped us to build a huge community where everybody works towards the good of everybody else, and we end up with a system that suits the needs of everybody. A lot of these other systems that are coming up don't necessarily meet either of these particular criteria, and are often designed not to. And maybe that's our future, but I think we should be asking ourselves: do we want to move away from this nice situation with our shared copyleft-licensed kernel? Is that the future that we want? And if it's not what we want, what can we do to prevent it? I think the only way that we can prevent it is to continue to work to make Linux so compelling and so widely usable that it's the automatic choice for people to use. If we don't do that, I don't think that we can count on the continued dominance of Linux in these areas, but I think that we shouldn't give up on that quite yet. And with that, that's pretty much all the time that I have, and I thank you all very much. Thank you.