So, my name is Ryan Ware, I work at Intel, and I wanted to give this talk because one of the things I see in the open source community is that a lot of times people don't have all the information they need to build a product that will withstand the security vulnerabilities that come along and affect it. Given the title of my talk, I had to include this in my slides. I don't know how many of you have seen UHF, a very old movie. You can tell from how young Michael Richards from Seinfeld looks there. Anyway, I wanted to start by talking about: what is a security vulnerability? And no, really, I'm not joking. What is a security vulnerability? A lot of people have real problems understanding what is a security vulnerability and what is not. You might think it's obvious, and in many cases it is, but people sometimes have a hard time articulating it. So, I have this Venn diagram that I like to show when I'm talking about this. On one side is your product's design. By design, I mean things like your architecture, your high-level design, your low-level design, your requirements, your specifications, any compliance regimes you need to meet. That's what you start out with. On the other side is your implementation - what you actually end up building. That's things like source code, object code, libraries, executables, your dependencies, your environment. And the two intersect. Here in the middle is the intended product, the product you're trying to get out the door. This little area over on the left is things you designed but didn't actually implement. And over on the other side is something that, for the moment, I'll call extra functionality.
I'll go over some of that in a minute. The things in the middle, the security vulnerabilities in that section, come from designing your product incorrectly in some way. For example - sorry, the screen just went out here - you designed your product to negotiate its cryptography incorrectly, or you designed your access controls the wrong way. These kinds of vulnerabilities happen, but they're not the majority. Over on the left, designed but not implemented, you can have security issues as well, from things you dropped out of your product that have security implications. For example, turning off a security feature you intended to ship, and now your product is insecure by default. And over on the right, the extra functionality: these are the things you hear about all the time, and this is where the vast majority of security problems come from. Stack overflows, heap overflows, pointer dereferences, format string errors, input validation problems, return-oriented programming. These are all extra functionality that you never designed to be part of your product, but that ships with your product whether you want it to or not. And you need to keep that in mind as you ship your product to consumers: these extra bits of functionality creep in whether you want them to or not. Now, I get this question all the time: is it secure? Think of this dot. This dot is a bit. Is "is it secure?" a binary question? You could treat it like one. The problem is, if you do, there's only one way to make your system completely secure. These guys are making their system secure. Okay? There's only one way to do it.
You destroy it, you never turn it on, you make it so it never functions. There is no way at all to have something be completely secure. It just can't happen. So is this the right question? No. Is it secure? The answer is always no. When you get this question from somebody - and it was interesting, because Linus in the Q&A this morning kind of brought this up - if somebody comes to you and says, "is it secure?", you know they don't know what they're talking about. There are a number of questions like this. Is it compromised? This seems like it could be a binary question: yes, it's compromised, or no, it's not. That's actually the wrong way to look at it. It's really a "yes or maybe" kind of thing. I want you all to take a second and think about this. Some folks are on their computers right now, like this gentleman in the front row - and that's okay, I'm not criticizing. People have their cell phones in their pockets. People are on their devices all over this conference. Later this week, you're going to wrap up, go home, and go back to your offices. Once you're there, how do you know you were not compromised while you were here? You're using your devices. There are hackers in the world. How do you know you weren't compromised? And you know what? It's almost impossible to say, "hey, I know for sure my device wasn't compromised." Devices are just too complex these days. Somebody like this - probably without the mask - could very well have compromised your device. You can definitely tell when the answer is yes, this device is compromised, because it has some particular virus or malware on it, but it's not a straightforward question. This is also a dangerous question whenever somebody comes to you to talk about security. Now, is it vulnerable?
This also seems like a binary question. Guess what? It's not. There's one answer: you are absolutely vulnerable. You are never going to ship a product that has no vulnerabilities in it whatsoever. So, say you went back in time to the end of 2015 and decided, hey, I'm going to ship a product, and you shipped it at the end of 2015. It's based on Linux, so it has things in it like the Linux kernel, OpenSSL, libtiff, and FFmpeg. Despite the fact that you knew of no vulnerabilities in that software stack when you shipped, these are all the vulnerabilities that happened in those pieces of software over 2016 - vulnerabilities you would have to address as part of your product. By the way, it's a logarithmic scale because otherwise it was just too jumbled. For just these four pieces of software, there were 294 total CVEs published against those packages. So if you have a product out in the field, you are going to have to address security vulnerabilities at least on a monthly cadence, and you need to keep that in mind when you're planning how you'll respond to vulnerabilities in a fielded product. Now, you may say: okay, so I have something vulnerable in my product. How quickly can it really be exploited? There's been a very interesting recent example of this. To be honest, it's a little bit of an outlier, but I think it's very illustrative of the problem. On January 26th of this year, WordPress released a bug fix release, version 4.7.2. One of the things they didn't say at the time was that it fixed a specific security vulnerability in their product. They actually waited a while before announcing that.
They waited until February 1st to say: hey, by the way, that bug fix? It fixes a security vulnerability. One of the reasons they gave was that they wanted people to have time to upgrade. And on February 3rd, two days after they announced the fix, you could see WordPress sites out in the wild being attacked through this new vulnerability. One of the nice things about looking at attacks against WordPress sites - especially these, which were just defacement attacks - is that it's very easy to count how many vulnerable sites there were. You can see that on February 4th, things just took off, and all of these various worms and botnets started exploiting WordPress. I actually pulled these numbers yesterday. Sites were defaced with things like "Hacked by MuhmadEmad" and "SA3D HaCk3D". If you go search for those terms in Google right now - as of yesterday - one of them shows 923,000 defaced websites at this point in time, and similar numbers for the other. That shows how quickly this went from zero at the beginning of this month to today, with millions of defaced WordPress instances still out there in the wild. Now, this was able to proceed so quickly because it's WordPress: it's Internet sites, it's all on the network. But remember that most products are connecting to the network as well, so attackers are able to reach out and touch your products too. So, who is finding the vulnerabilities in these software packages? A lot of folks think of the traditional hacker.
You know, somebody who lives in their mother's basement, who sits in the dark and is on their computer all the time - somebody like this guy, Park Heaven Smith. I'll never stop using this picture. That's the stereotypical hacker right there. It's not really true anymore. Things have moved on. This was probably true 20, 25, 30 years ago, but things are different now. Now, hackers like to view themselves like this - they think, oh, I'm cooler than Neo. And to be honest, things are actually much more like this: just people who are smart, working in an interesting field, trying to do what they think is right. People who have grown up to be people like me - at one point in time, when I was quite young, I was probably more like the stereotypical hacker. One thing I do want to point out is that hackers never look like this. And don't go see Blackhat. It's a horrible movie. So I want to talk a little bit about the hacker ecosystem that's out there. Here on the horizontal axis is increasing capability: going from script kiddie to hobbyist hacker to expert all the way to specialist. On the vertical axis is the increasing impact of what somebody is trying to do: curiosity, personal fame, personal gain, national interest. So it goes all the way from the lower left-hand corner - script kiddies who are just trying to vandalize websites, which, by the way, is exactly the kind of people counted in that WordPress vulnerability I just pointed to - up to nation states with specialists who do nothing but focus on computer security. Hello, NSA. Can you protect against everything in this entire spectrum? Do you think you can build a product to protect yourself from the NSA? I work for a very large company. I don't think we can.
But you do have to figure out what you're trying to protect against. You absolutely need to protect against the script kiddies - if you aren't doing that, God help you, you're going to have an interesting time. But this part right here is the fastest growing segment of the entire spectrum, and I'll tell you why in a couple of slides. There are some interesting things that happen in this part of the spectrum as well. These folks are experts, but they're also looking for personal gain, and that personal gain comes down to money. It used to be - a while ago, probably 10 years ago, say - that the money these folks were making was all from criminal activity: finding vulnerabilities and selling them to nefarious organizations like organized crime or, potentially, governments. That's changed a bit, and again, I'll explain why in a couple of slides. One of the very interesting things is that these folks, along with the other experts here, create all sorts of interesting tools. For example, if I figure out how to hack into a system, I don't want to do it by hand every time. I want to build tools that work for me, so I can use the tools to make my day job easier. Now, those tools end up in the hands of more hobbyist hackers who don't really understand what the tools do, and then people who are looking for personal gain start using them - again, people like the script kiddies I mentioned earlier. And it's interesting: these tools float around and people start using them, and the authors don't necessarily even know how they're being used. I was taking a SANS 660 course, probably four years ago now.
For those of you who don't know, SANS gives security training, and SANS 660 is a five-day course on penetration testing. We start the second day, and I'm there with a bunch of members from my team, and all of a sudden I hear one of them say out loud - and Mark will laugh, because this was Dean Pierce - Dean just goes, "hey, wait, this is my tool." The SANS course has a chapter on how to use a tool that he wrote when he was in college, and he had no idea that SANS was teaching it. He actually thought his tool was dead. So these tools float around everywhere, and you have no idea where they're going to end up. Now, let's talk about the money aspect. Here's a good example from 2008. In 2008 I went to CanSecWest, which is where a new contest called Pwn2Own started - 2008 was the first year. They gave out $160,000 in prize money. I actually know somebody who could have gotten $15,000 of that, but he said, no, I can sell it. And he did sell it. But that's not really the money driving this today. So: bug bounty programs. I stole this definition from Wikipedia because it's a reasonable one: a bug bounty program is a deal offered by many websites and software developers by which individuals can receive recognition and compensation for reporting bugs, especially those pertaining to exploits and vulnerabilities. The first real bug bounty program was started by Netscape in '95, a long time ago. It was kind of quiet, but it helped them a lot. In the last few years, though, bug bounty programs have really taken off. There are hundreds of them at this point, and major players have them: Google, Facebook, Microsoft, Dell, PayPal, Yahoo. Wait, there's something wrong here. Major players. Yahoo. Let's fix that.
These companies are all paying external people to find security vulnerabilities in their software, in addition to any testing and validation they do internally. So what kind of money can you make doing this? A typical example is the Chromium bug bounty program. There's a whole range, from $500 up to $100,000. There is a standing $100,000 reward for participants who can compromise a Chromebook or Chromebox with device persistence in guest mode. Basically, if you can compromise a Chromebook as it comes out of the box, you can score $100,000 - minus taxes. And there's a whole range of different things you can get paid for. Low quality bug reports - "hey, I found a bug, I think it's here" - can get you $500, even for something that rudimentary. Baseline bugs - say, a buffer overflow that exists and might, given a hope and a prayer, be exploitable - get $2,000 to $5,000 if it escapes the sandbox, or $1,000 to $3,000 for remote code execution. This goes all the way up to $15,000 for a high quality report with a functional exploit showing how you can actually break out of their sandbox. And in actuality, for some people they found who created an exceptional exploit against their software, they've paid $30,000 a number of times, even though they don't talk about it on their site. Part of me wants to go independent and just make money from these, because I know how to do this stuff, but I enjoy working at Intel too much. So that's the typical range. What's the high end of what you can make? There's a company called Zerodium. These are the payout ranges from their website.
They go anywhere from $10,000 all the way up to $1.5 million. Okay? So if you want to become a millionaire today, find a remote exploit for Apple iOS that jailbreaks out of its sandbox. They're paying $1.5 million for that right now, and I can tell you why: they're paying that so they can sell it to the FBI, so the FBI can break into an iPhone. But it's a huge chunk of money. Even a remote exploit on Android is $200,000. And Flash - I mean, you've seen how many issues there are with Flash - goes up to $100,000. These are significant chunks of change. And to be honest, there are still nefarious organizations out there who will pay even more than this, depending on their goals. Okay. Can we get back to the CVE thing? So, what is a CVE? CVE stands for Common Vulnerabilities and Exposures. It's actually the name of a database: a database of, quote unquote, all publicly known software security vulnerabilities, starting from 1999. A lot of times you'll hear security researchers say, okay, there's a new CVE for this. Over the years, CVE has become the term used for individual vulnerabilities, even though it's just the name of a database. It was never intended for that purpose, but that's okay. It's run by a nonprofit called MITRE. MITRE manages the database on behalf of the U.S. National Cyber Security Division, and they've been paid by the U.S. government to do it for, at this point, forever. There are currently 81,785 vulnerabilities in the database as of yesterday. And currently - this actually freaks me out a little bit - so far this year there have been 1,822. If you calculate that out, it's a little more than 35 new CVEs per day so far this year, which dramatically outpaces all previous years.
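That back-of-the-envelope rate can be checked in a few lines of Python. The day count is an assumption on my part, based on the talk taking place in late February:

```python
# CVEs published so far this year, per the talk
cves_so_far = 1822
# Rough number of days elapsed in the year (assumption: late February)
days_elapsed = 52

per_day = cves_so_far / days_elapsed   # a little over 35 new CVEs per day
annualized = per_day * 365             # comfortably above 12,000 if the pace holds

print(round(per_day), round(annualized))
```

The exact annualized figure depends on the day count you assume, but any plausible value lands well past 12,000 for the year.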
If it keeps up this way - I don't think it will, but if it does - it'll be more than 12,000 issues this year alone. And if you look, there's a long-running trend here: back in 2006 we kind of peaked and petered out a little, then peaked again in 2014, and in the last couple of years there have been a bit fewer. I'll talk in a little bit about why, but I actually think the reason there have been fewer since 2014 is that the bug bounty programs have taken off, and I'll explain that in a few slides. There's also the silent bug fix, which is something else you need to keep in mind. Yes, the CVE database is great, but many companies do not publish CVEs for internally found security issues. I don't know of a company that publishes CVEs for every security issue it finds in its products. It just doesn't happen. And this is one of the reasons I think the database isn't showing the true vulnerability rate. In general, if you file a bug through a bounty program, it's treated as internal, so you don't generally get a CVE if you're paid. So a lot of things that in the past would have been CVEs are now being silently fixed by the companies paying bug bounties, because they treat those reports just like their own internal validation and testing. Bug bounty programs don't publish CVEs for found issues. Also, many bugs that may have a security impact get fixed as traditional functional bugs. Even though a bug had security implications, it was fixed by somebody who wasn't thinking it might be a security issue - or who saw it as primarily a functional issue - so they fix it, and that doesn't end up being a CVE either. You have buffer overflows, for example, that get fixed just because the code crashes and, okay, well, we need to fix that.
So there are many security bugs that never end up in the CVE database. Great info - how does this help me? By the way, I just love this picture of this kid. You need to think about the survivability of your product. You must include an update mechanism of some type in your product. If you aren't including an update mechanism - for example, if you're shipping an Internet-connected webcam that has a backdoor the Mirai worm is exploiting, and there's no update mechanism in it, which is the case - your customers can't actually fix the issue. You need to be able to update it. If you don't have an update mechanism, you're essentially telling your customers: I'm sorry, we don't care about you. And that's really not the message you want to give people who just gave you money. Something else: you want to make it easy for your customers to update. As you can tell, I use Apple products, for various reasons we can go into privately if you want. But I remember back in the days of my first iPhone, the way I had to update it was to connect it to my Mac, get the image from Apple, use iTunes, and go through all of this rigmarole. It's like, well, screw that, I'm not doing all that. You definitely don't want your customer thinking: I don't want to go through all of this process just to update something that may not even affect me. Just a couple of weeks ago, I got the new Linksys Velop mesh Wi-Fi - and by the way, if you want to spend the money, it's a great setup; I don't own stock, but it's a good product. It has an update mechanism that is almost completely transparent to the user. At about 2 a.m. every night, it checks to see if there are updates on the server. If there are, it automatically downloads and installs them and does a quick reboot.
The system is off for about two minutes, and it all happens transparently behind the scenes. You can configure it to happen at other times of day if you want - say you're a night owl who's up all the time - but it's completely transparent to the end user, and it works great. That's the kind of goal you should have for your product: an update mechanism your customer almost doesn't even know is there. There are many different mechanisms you can use if you're looking for one. If you're using an Android-based stack, you can use Android OTA. swupd is a very good tool that's used by Clear Linux. There's SWUpdate, Mender, OSTree. You can even use published repos if you're using apt or yum. You have to have something. Now, how do you track these vulnerabilities? This is a great website - I really love this site - CVE Details, at cvedetails.com. It's a different view of the CVE database than what MITRE presents. These folks take the data and let you generate graphs and do all sorts of things; I actually have some graphs from the site a little later. One of the nicest things you can do: say you care about OpenSSL because it's in your product. You can use their little tool to create an RSS feed, an embedded vulnerability list widget, or a JSON API call URL, so you can keep up to date on new vulnerabilities that come out in that software stack. All you have to do is follow the RSS feed or the JSON API call, and that lets you follow all new vulnerabilities in the packages you care about. It's a great tool. You should go check it out. It's fun even to just poke around and see the different trends in the data. I also want to talk about - and, I guess, embarrass a little bit - Ikey Doherty.
He works for Intel in the Open Source Technology Center, on Clear Linux. He created a tool which is absolutely and utterly awesome called cve-check-tool. It's used by the Clear Linux folks, and it will scan your sources for known CVEs. I can't sit here and say it's 100% perfect - we actually found a case a week or two ago where it missed a couple of CVEs for some reason, and we're looking into that - but it's a great tool. We forced Ikey to rewrite it from Perl into C. Thank you, Ikey, for doing that. There are also various commercial solutions. I'm not here endorsing any of them, but Black Duck Hub is good from what I've heard, and so is White Hat's open source checker. There are a number of good solutions. I do want to talk about attackable surface area for a minute - it was funny, because Linus brought it up this morning with Dirk. The attack surface, as Wikipedia puts it, of a software environment is the sum of the different points, or attack vectors, where an unauthorized user - the attacker - can try to enter data into or extract data from an environment. You can imagine that the Linux kernel has a fair bit of attackable surface area, given that it's 14 million lines of code. At the same time, remember that the vast majority of that code is drivers. When you build your product, if you limit the drivers to just what you need, you're limiting your attackable surface area. One of the things you want to make sure of is that you're only including software in your product that you actually need. And I had an interesting experience with this. A long time ago, probably 2006, I was working on a product that we were building out of Gentoo.
I like Gentoo for a lot of reasons, and it was great for that particular project - or at least seemingly great, because we were able to build from the bottom up, including just the things we wanted. That sounds great, but one day I was going through the package list of everything installed in our product. How many people here know what Phrack is? P-H-R-A-C-K. Hey, there's at least a couple out there. Phrack is a hacker magazine. I was looking through the package list and thought: why is there a package in here called Phrack? I was working on a medical device, and some package we were including had a dependency on another package that included all the issues of Phrack magazine up to that point in time. Not something you want on a medical device. Not that Phrack magazine itself would have been exploitable, but you want to make sure you're only including things that are actually part of your product. You know why? There's nothing more satisfying than being able to say: oh, there's a CVE - guess what, I don't have to do anything, because that's not in my product. Conversely, it's a big bummer of a day when you go: oh, well, we included Telnet in here for something, and it's exploitable, and even though we aren't using it in our product, we still have to go fix it. That's not something you want to spend money and resources on. A few other important concepts. Least privilege: if you have a developer who comes to you and says, "but I need to run as root" - danger, danger. Almost nobody should ever need to actually run as root. And usually, by the way, that phrase is followed by: "but I'm special, it's okay for me." Software needs to run with minimal privileges. You do not want some random piece of software running as root with complete access to your physical memory layout.
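As a minimal sketch of the least-privilege idea: a service that has to start as root (say, to bind a low port) can permanently drop to an unprivileged user before doing any real work. The uid/gid 65534 here is an assumption - it's commonly "nobody" on Linux systems - and a real product would use a dedicated service account:

```python
import os

def drop_privileges(uid=65534, gid=65534):
    """If running as root, permanently drop to an unprivileged uid/gid.

    65534 is commonly the 'nobody' user (an assumption; use a dedicated
    service account in a real product). Returns True if privileges
    were actually dropped.
    """
    if os.getuid() != 0:
        return False        # already unprivileged; nothing to drop
    os.setgroups([])        # clear supplementary groups first
    os.setgid(gid)          # group must change before the user ID...
    os.setuid(uid)          # ...because after setuid(), root is gone
    return True
```

The ordering matters: once `setuid` succeeds, the process can no longer change its group IDs, so groups are dropped first. By contrast, a service that stays root for its whole lifetime keeps full access to the system.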
That's not something that you want to have in your product. Also, you want to keep defense in depth in mind. You need to have multiple protections in place. Code reviews. You need to make sure that your developers are doing code reviews. Is there anybody else in here other than me who has written perfect code? People are shaking their heads like, no, I haven't written perfect code. I mean, nobody writes perfect code. And, by the way, when I say this joke, I mean, my wife says, you're not perfect. So, you know, no one writes perfect code. You know, look for code reviews that are, like, submitted and then accepted, like, within minutes of each other. There's usually something wrong there. And you should use static code analysis tools whenever you can. It's a nice, free, easy, automated way to get an extra set of, you know, quote, unquote, eyes on your code. And then, you know, I've actually seen people that actually ship products without actually testing them. Actually do some tests, some validation, make sure that your product actually does what it says it does. And, you know, just some things here real quick, you know, I've talked to you about, you know, what constitutes a security bug versus other bugs and questions that are dangerous signs for those unfamiliar with security, how quickly vulnerabilities can start to be exploited once they're out in the wild. What kinds of people find vulnerabilities in how bug bounty programs kind of play into this whole cycle of feeding on itself. And what CVEs are and how to track them and the various tools and techniques that may help you survive. You know, I appreciate you on this rainy Portland morning being here to listen to me drone on about security. You know, does anybody have any questions? Yes. So the question is, should an outside company or a small company seek outside expertise to help them scale? 
It depends on who you have internally, but if you do not feel you have the expertise in-house, absolutely you should, because you need to make sure you're protecting yourself when you ship your product by covering all of these issues. If you don't know how to do it internally, definitely go outside and get the expertise. There are some interesting ways to find the right people to help you, but yes, you should definitely look at that. Yes, Alexios. How well does the CVE database cover the open source landscape? Actually, it probably covers it better than proprietary software, because the open source projects are generally fairly amenable to saying: yeah, by the way, there's this bug we're fixing here - all of the development happens out in the open. For commercial products, a lot happens behind the scenes, and they don't want to talk about it. So if you do the breakdown, open source makes up a significantly larger percentage of the CVE database than you would expect just from its natural security vulnerability density. Yes. So the question is: is somebody working to incorporate CVE checking into the Yocto build? I don't know of anybody specifically doing that, but it would be a great idea if somebody in the Yocto community incorporated a CVE checker into the build process. Yes - so the cve-check-tool I talked about, Ikey's tool, is already incorporated into Yocto; you just have to turn it on. I knew they were doing it, but I didn't want to bring it up in case it wasn't there yet. Any other questions? Yes. So the question is: are there plans to put CVE references into kernel commit messages? To be honest, I don't know. The kernel has a very interesting dance it has to do.
It's a dance of: hey, we're open and everything we do happens in public - but at the same time, we don't want to essentially announce a security vulnerability in the kernel, which ends up being a starting pistol for everybody who has a product based on the kernel. So I don't have a problem giving the kernel a little leeway, as long as they're fixing issues. Whether or not they say it's a CVE in the commit message, I'm not stringent about. But I don't know of any current effort to make that happen. So the question is: what's my view on the trade-off between fixing security issues and potentially increasing platform instability based on a known delta? We struggle with that. My personal view is that you should stick with the latest upstream stable release for the major kernel version you're using. So if it's 4.4, make sure you're on 4.4.x, whatever x is latest. The reason I say that is that the changes that go into the long-term stable releases generally don't cause regressions in performance or platform stability. I won't sit here and say that never happens. But if you don't do that, you are leaving your platform vulnerable to known issues. Can you backport those fixes to your particular kernel, so you have just that chunk of code? Yes, you can do that, but it becomes quite expensive as time goes on. Additionally, there may be security issues in your kernel that aren't in newer kernels, so you might be exploitable through something that's never even published as a CVE. So it's hard to bite the bullet and keep up with the latest stable kernel, but in the end I think it's the most beneficial approach, and generally the performance issues people fear don't come up. At least that's my experience.
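That "stay on the latest point release of your series" advice can be sketched as a small check. The version strings here are made-up examples, and a real product would fetch the current stable number from kernel.org rather than hard-coding it:

```python
def parse_kver(version):
    """'4.4.52' -> (4, 4, 52); a missing point release counts as 0,
    and suffixes like '-rc1' are ignored."""
    parts = version.split("-")[0].split(".")[:3]
    nums = tuple(int(p) for p in parts)
    return nums + (0,) * (3 - len(nums))

def needs_update(running, latest_stable):
    """True if `running` is behind `latest_stable` within the same
    major.minor series (e.g. behind on 4.4.x)."""
    r, l = parse_kver(running), parse_kver(latest_stable)
    return r[:2] == l[:2] and r[2] < l[2]
```

So `needs_update("4.4.10", "4.4.52")` flags a kernel that's behind on its own series, while a kernel already at the latest point release, or one on a different series entirely, is left alone - which matches the talk's point that you track your own stable branch rather than jumping major versions.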
We have interesting discussions with internal partners at Intel on this very topic. So I have one minute left. Any other questions? People are like, no, I want coffee. Cool. Thank you.