Okay, so the final keynote speaker for Open Source Summit Japan this year is Greg Kroah-Hartman. I don't think Greg needs much of an introduction for this audience, but just in case you don't know him, Greg is a kernel maintainer. He maintains the stable kernel branches. So if you are doing any business using Linux, you are benefiting from Greg's work. Greg is talking about Linux kernel security today. So without further ado, let me bring Greg up here. Please welcome him.

Hello. It's good to be back in Japan after many years. I'm a kernel developer, and I'm also a member of the Linux kernel security team. We've had lots of talks and lots of issues as security becomes more and more prominent, so I thought I'd discuss it and try to explain what's going on. There we go.

So why talk about this now? A really big thing: I live in Europe, in the Netherlands, and our utility grid is under a lot of pressure. But so are the laws. The EU recognizes that open source and software run the world, and software is not regulated. Paint is regulated, plastics are regulated, electronics are regulated. But the software in your devices is not regulated. So they set about making rules and laws to help regulate this. And this is a good thing; we want this. But in the creation of these laws, it was interesting trying to explain to politicians how software is created using open source. There's a big mismatch, because we're not just companies; we're individuals and companies contributing together. You can't go back and hold liable individual developers who work in other countries, on random projects, whose software happens to end up in a device that ships to the EU market.

So the CRA, the Cyber Resilience Act, was finally approved and finalized a few days ago, and it looks like we got a lot of good changes in there. But it's going to start hitting everybody. It's going to hit the Japanese market as well; the whole world sells its devices and products into the EU. It's going to define the security requirements. The US government is also creating security requirements. So people are coming to the Linux kernel security team and asking: how do you meet these requirements, and what's going on? These companies all say they want to join the security team. Let's talk about that a little more. There are no corporate members on our security team. And lately, governments and other companies want to know about the security announcements. They rely on Linux; they want to know ahead of time what's going on, when there are problems, when to upgrade.

And all of us know Spectre and Meltdown. Hardware is buggy. The job of software is to fix hardware's problems. Hardware embargoes are hard. They're difficult: you have to deal with hardware supply chains and development cycles that are much longer than software's. And we have to deal with these; they are not ending. The Spectre and Meltdown issues will be with us for a while. They keep trying to fix them in hardware, and it's not working. Another one was just announced a few hours ago. So there is no end to this anytime soon.

So let's talk about software. This has changed, and a lot of people don't realize it. The Linux commercial distribution model is not dead, but it is not the majority anymore, by far. 80% of the world's servers and systems run free and open source distributions based on Debian, based on Fedora, based on openSUSE. The majority of the servers in the world are not running commercial distributions.
Because of that, the inter-company interaction model we saw with Spectre and Meltdown, where you get the distributions talking to each other, or talking to Intel, talking to AMD, just doesn't work. Because where's Debian? Where are the other open source projects? And all the Linux servers in the world are a rounding error compared to Android. Android really is in the billions; servers are in the hundreds of millions, probably less than that. Android is an open source project that really just consumes the kernel, so the company-to-company model doesn't work well there either, and Android is also spread across a huge number of companies. On top of that, the community that works on these open source projects can't sign NDAs, because they're community members: they live in other countries, they work for different companies. It just doesn't work. So the old model of companies talking to companies under NDAs, even going through the Linux Foundation, just does not work.

So let's talk about how the kernel security team works. We're reactive, not proactive. There are other groups and other kernel security teams and projects that are proactive: there's a security conference every year, there's the Linux kernel hardening project, there's lots and lots of good stuff going on. But that's not what we do. We just react to problems, and that's fine; somebody has to do this type of stuff.

This all started back in 2005, when somebody on the kernel mailing list said: hey, I want to report a security bug, how do I do this? 2005 was an interesting year. 2005 gave us the security list, the first stable kernels, and Git; lots of things happened in 2005. We kind of grew up. So they said, hey, we need to have a list. Some of us had been doing this on our own in an ad hoc, informal manner, and we got together. Chris Wright wrote up the rules and submitted them as a patch; he's now the CTO of Red Hat, he's gone on to better things than being a kernel developer. And one of the most interesting things in there, which Chris and Linus set up, is at the end: we don't do NDAs. The Linux kernel security team is an informal body, and we can't sign a contract. That turned out to be the best thing that could ever have happened to us. It set the stage for us doing this in a way that is company and government neutral, and it has saved us so many problems. So kudos to Chris for getting this right.

So how do you do this? We have an alias: security@kernel.org. You think you found a security problem? Email us. That's it. It's just an alias, not a mailing list; there are no archives anywhere. It explodes out to the individual members. It's a small group of us, I think 10 or 12 people now. And we don't represent our companies; we can't tell the companies we work for what is going on. We have had problems where members of the list did tell their companies, and we had to remove them from the list. It's all kept quiet. In all the years since 2005, we have never had any leaks. That's a pretty good track record, so companies trust us.

All we do is triage. You send us a report, we triage it, we figure out what's wrong, and we drag in the proper developers if they're not on the list already. We work to create the fix as soon as possible, we get it merged into Linus's tree, and then into the stable trees that Sasha and I release. The goal is to get it fixed as soon as possible. That's it. And if you get dragged in a whole bunch of times, we just add you to the security alias.
So it's not exactly a good thing to be added to the security team. It kind of implies that your subsystem isn't doing so well and we got tired of having to do the round trip. Every couple of years we go through and purge the list and ask whether people still want to be on it. People move on; they don't want to do things anymore. It's all fine.

And that's it. So again, we fix the issue as soon as possible, and we don't have embargoes. Once we have a fix, the most we will hold on to it is seven days. I think we actually held one for maybe ten or fourteen days once, on a very special occasion. But that is after we have a fix. When we get a report, we start working on it as soon as possible. We've had some fixes that took over a year; we've had some networking issues where I think we went 18 or 19 months before we fixed them properly. But once we had the fix, we got it merged as soon as possible. Other open source security projects set timelines of 90 days or 30 days, and that works out great for them. Ours is seven days after we have a fix. And this is going to collide really hard with all the laws that countries are trying to create. They're saying things like: you have to fix this, or do something, within 72 hours, which actually goes against labor law in the EU. So it's going to be interesting to see how those two things interact. But that's it. After seven days, we push it out. Normally we don't hold things; I don't think we've had an embargo on anything in the past year or so. Fix it, move on.

And we don't do any kind of announcements. We don't announce anything; we don't say anything was special. We just push it in, it looks like a normal bug fix, it gets out to the tree, and away we go. And this makes people mad. They want to know what the security fixes are. Back in 2008, when we'd been doing this in an official way for about three years, somebody complained, and Linus wrote back. These slides will be online, so you can see the links better. Linus's position, and my position, and that of many other people on the security team, is that a bug is a bug is a bug. We get notified of a bug, we fix it, we push it out, we move on. There's nothing special about security fixes. And if we call out security fixes as being special, that also implies that other fixes are not special, which would mean people would not take those fixes, when they should take them all. Because at our level we don't really know whether a fix is a security issue or not. Any bug has the potential of being a security issue at our level of the software stack. Linus goes on to say that people are wrong to think calling out security issues as different works; it just doesn't. And, very bluntly: what is unclear about this?

This has been proven over and over and over. If you look at a number of security fixes... I like calling out a bug that I wrote many years ago in the TTY layer and then fixed a few years later. I thought it was just a normal issue. Three years later, it turned out you could get root on all RHEL systems because of it. I didn't realize it was a bug when I wrote it. I didn't realize it was a security issue when I fixed it. Nobody at Red Hat realized it for three years either. If we, the developers who know this stuff inside and out, don't know this, that's proof that any bug at this level is potentially a security issue. There are some really good write-ups out there on how you take something like a one-byte overflow of an array and turn it into root.
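To make that concrete, here is a minimal C sketch, entirely hypothetical (this is not the actual TTY bug, and all the names are made up), of the kind of one-byte overflow those write-ups start from. It reads like an ordinary string-handling mistake, yet the stray byte lands in whatever happens to sit after the array:

```c
#include <string.h>

#define NAME_LEN 16

/* Hypothetical structure: a flag happens to live right after the buffer. */
struct session {
	char name[NAME_LEN];
	int  privileged;
};

/*
 * Off-by-one: the check should be "len < NAME_LEN". When len == NAME_LEN,
 * the copy fills the whole array and the terminating NUL is written one
 * byte past it -- here, into the first byte of `privileged`. In review
 * this looks like a mundane string bug, not an escalation primitive.
 */
void set_name(struct session *s, const char *src, size_t len)
{
	if (len <= NAME_LEN) {
		memcpy(s->name, src, len);
		s->name[len] = '\0';	/* writes s->name[16] when len == 16 */
	}
}
```

Whether an overflow like this is exploitable depends entirely on what sits next to the array in a given build, which is exactly why nobody can reliably label a fix "security" or "not security" at the time it is written.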
The people who are doing attacks, and chaining attacks together in software, are really, really good. They do some amazing things. So any bug at our level has the potential of being a security issue. And that's true; people need to realize this. Any bug could be a security issue, not just something we call out, so we don't call out anything. We just say: take all the fixes and move on. And this has been proven to work out really, really well.

Other people worry: hey, I don't want to take this fix today, because the fix itself might have a bug. That's a real concern, and it's common. Fixing the Meltdown and Spectre issues took us months. We fixed the fixes, and we fixed the fixes of the fixes, and we fixed them again, and again, as we did better testing. That's just normal software development. But the risk of a buggy fix is much less than the risk of a known vulnerable system. Just take all the fixes. Because if you take the fixes and they're buggy, we'll fix them again, because we know what's going on. This was illustrated perfectly by the last Spectre issues. About six months ago we had another Spectre issue. Some company took just the first initial patches and pushed them to their systems, and a month or two later they came to us and said: hey, it looks like our systems are still vulnerable, because they could still reproduce the problems. And we said: did you take all the fixes? And they said: yeah, we took the first ones. And we said: no, no, no. You have to take the ones from the week after that, and the ones from the week after that, and the ones from the week after that, and then you'll be fixed. They did that, and said: yeah, it all works. You have to take all the fixes, all the time.

Because the biggest issue is this: the kernel is 30 million lines of code. You only use about one and a half to two million lines in a server, three and a half to four million in your phone, about one and a half million in your TV. But we don't know what you're using. We don't know what your use case is. Linux is in everything: it's in your smart meters, it's in your cars, it's in satellites, it's in cow milking machines, it's in the stabilizers of mega superyachts. Linux is everywhere. It's in my washing machine. We don't know your use case. We don't know how you're using Linux. We don't know what your security model is. We don't know what part of the kernel you're using, what code you're using, whether you use this file or that file. So just take all the fixes. And most importantly, we don't want to know. You don't have to tell us, and I don't want to have to keep track of it. But you know this stuff. You know what you're using. So just take all the fixes.

The Google Android security team, for a number of years, documented all known security problems found in the Linux kernel and compared them against the stable kernels that we released. Every single one of them, for two years, was fixed weeks, if not months, before it was reported to the world. They have documented proof that taking the stable kernels always works; your systems will be secure. And because of that, Android now requires this. They require the stable kernels to be taken, on a longer cadence than we'd like, but we're trying to shorten that and make it a little better. It's documented proof that we're fixing things before people know about them. And on the subject of bugs, Ben wrote a really good article about this stuff.
It's hard to determine whether a bug in your environment is actually an issue or not: whether it will just crash, whether it will have no effect at all, or whether it's a security issue. Vulnerability remediation is really, really hard. Ben is a really, really good engineer at this type of stuff, and if he can't figure out how to do it, nobody can. Just take it all. He has a really good essay on what makes a good kernel bug; I really recommend it. It goes into the details of how this all works.

So again, the kernel security policy: fix everything as soon as possible, and get it out to the users as soon as possible. But this doesn't really work for hardware issues. Hardware people think they're special. They're not; they're just slow. And that's going to grate really hard against the new laws that are coming into play. I'll call out the EU rules that bugs must be fixed in X number of days. I have been told by major hardware vendors: hey, there's this problem in our chips, we're going to fix it in 15 months. I don't want to know that anymore. So we're trying to shorten that a lot. Hardware companies need to get on the ball and make things faster. They're not there yet, but they're working on it.

So we handle hardware issues as a separate thing. We have an encrypted mailing list. We don't have NDAs, but we do have cross-company communication, and we have cross-operating-system communication: we work with Xen, we work with the BSDs, we work with Windows, we work with other operating systems out there for major issues. And we do tolerate embargoes. Like I said, 15 months? We pushed back and said: I don't want to know; tell me when you're two months out. This is a bone of contention for a lot of us, for me and Linus and other people. It should be shortened to a realistic time. It also really hurts the security researchers. I do a lot of work with the university in Amsterdam; it turns out they find most of the Spectre bugs. They have had master's students' graduations delayed because hardware companies delayed the response to the issues those students needed to publish in their master's and PhD theses in order to graduate. That's not very good. I think the companies now finally realize they should not do that, and they will not delay people graduating. And if you look, that same university announced more security bugs they had found in hardware just this morning. So we're tolerating the embargoes right now, and we're trying to shorten them. The governments and the lawmakers are really not tolerating this, and they're going to shorten it a lot more.

How this works today is written up in a document in the kernel tree. And as I said, governments want to get involved, because they worry about hardware issues and things like that. It turns out Linux not only runs your phone and everything else; governments rely on Linux. I'll call out that the country of France essentially runs on Linux. The Netherlands runs Linux. China runs Linux. In Japan, your street signs out here all run Linux. Your trains run Linux. Your physical infrastructure is running Linux. Governments rely on it, so they want to get involved with security issues. That gets very tricky very easily. But now they realize we're not a formal body, we're individuals, and that works out much, much better. So again: we don't do any announcements. Take all the fixes. We cannot assign CVEs; CVEs mean nothing for the Linux kernel.
And we have no early assignment of anything and no early announcements of anything. And yeah, I had a slide about CVEs; I don't know where that went. The early announcement list: I used to get asked about this a lot by a company here in Japan. Please add us to the early announcement list. We don't have one, and this is why. I will say publicly that any early-notice list you have should be considered a leak, because now the information is effectively public. You don't know who knows it. You don't know who within a company knows it. You don't know which governments know it. Just treat it as public. So any early-notice list is just a leak, unless your project isn't used. I will note that many open source projects have early announcement lists; maybe those projects aren't used that much. Because the biggest question is this: why would your government allow such a list to exist if it weren't a leak? Security is tricky. I've talked to more government people over the past couple of years than I ever wanted to.

So again: security fixes happen at least once a week. These are known issues, fixed at least once a week, and they look like any other bug fix. Maybe something isn't known to be an actual security issue until years later. And there's no differentiation in bug types; a bug is a bug is a bug. And here's my CVE talk: I gave a whole talk at the Kernel Recipes conference in France a few years ago about how CVEs are broken for the kernel, how they don't work, how the whole infrastructure is just bad. It's a fun talk; go watch it.

So I'll leave you with one last quote from me: if you're not using the latest stable or long-term kernel, your system is insecure. And I will call out any company that does not take the latest kernels: it is insecure. Enterprise distros that don't do this. I know of maybe one company that does this right, and I'm not going to say who they are, because that would insinuate that all the other companies are not. Phones that do not take the latest stable kernels: I have gone to manufacturers and said, I have your latest phone, here's how you get root. Now they run the latest stable kernels. This is just a fact; Google has documented it. They published documentation on this a number of years ago, and nothing has invalidated it since. So take the latest stable kernels that Sasha and I release. Take the long-term kernels for products; those are working well too. Go look at Sasha's talk yesterday about how you want to take the latest long-term kernel; don't stick with the old ones. It's just good engineering practice, and it's well known. That's how you take the stable kernels, and that's how you keep a secure system. Thank you very much.
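As a small, hypothetical illustration of that last point, a sketch like the following uses uname(2) to print the kernel release a system is actually running, so it can be compared by hand against the latest stable and longterm releases listed on kernel.org; if it lags the current series, the system is missing known fixes:

```c
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
	struct utsname u;

	if (uname(&u) == -1) {
		perror("uname");
		return 1;
	}
	/* Prints e.g. "6.6.2" -- compare against the current stable or
	 * longterm series published on kernel.org. */
	printf("running kernel: %s\n", u.release);
	return 0;
}
```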