So we're going to start off here with an update from Kees Cook on the Kernel Self-Protection Project, which got a kind of formalized start last year at the Kernel Summit. Kees gave an overview and presented a whole bunch of information to the core kernel developers, and the aim has been to try and bring some of the out-of-tree code and ideas for hardening the kernel into Linux. There are a number of technical and political, social, psychological, psychiatric issues to deal with, and Kees has taken that on, so I'll hand it over to him.

Cool, thanks. I have a bunch of slides, and some of them are pretty dense, so if you want to look at this stuff offline or follow along with me now, it's at that URL; it'll be at the end again, too. But yeah, this is the status of kernel self-protection. This is about the Linux kernel, obviously, and here's our agenda for what we're going to be covering: some rationale for why we're doing this, then who's gotten involved, what we've done, other stuff we're looking at doing, and the challenges we've faced. A lot of these things apply to the general software ecosystem, but this is very specific to the Linux kernel. And I'll just blaze ahead.

So, this being a security summit: "security" is a pretty loaded word, and it covers a lot of areas. For the context of this presentation, I'm talking about things beyond access control, beyond attack surface reduction, beyond bug fixing, beyond protecting user space, and, it can be argued, even beyond simple kernel integrity. This is about self-protection technologies: proactive systems in the kernel to stop attacks. A lot of that is wrapped up in kernel integrity, but this is about those pieces of self-protection of the kernel itself, as opposed to things the kernel implements to protect user space. And then, why are we doing this?
I mean, I probably don't need to cover this with most of the people here, but there's a ton of stuff using Linux. We're over a billion devices on Android; it has huge penetration. And, I'm going to come back to this point, but the vast majority of these devices are running effectively old software. They're out in the world, they're running old stuff, and they may or may not receive patches, so fixing bugs won't necessarily help those devices. As we move forward, we need devices that are actually protecting themselves a little bit more than they are now, because the lifetime of bugs that are discovered is even longer for devices that are out in the field, since they're dependent on vendors actually issuing the bug fixes for the released devices. So even if upstream says, sure, we found that bug and we fixed it: okay, but what kernel version was it fixed in? Did it end up in a stable release? Did a vendor backport it? Did the carrier for the phone take that update from the vendor and push it out to phones?
There's a very long lifetime, potentially, and this is becoming an even greater problem with Internet of Things stuff. Maybe you've had your phone for three years and you think, wow, this phone's getting really old. Compared to servers that's kind of crazy, but with IoT you say, well, I installed this thing in my door lock, and I'm probably just going to leave it there for the next 15 years. We end up with very long device lifetimes. And I hear a lot of blame shifting about where this problem needs to be solved. Upstream developers say, well, we've done everything we can, the bug is fixed, it's on someone else to make sure it's rolled out. And then the people rolling out the fix, the vendors, say, well, we can't get it out to these devices because they're not coming online, or there are lots of other reasons. So the idea is to build in the protection technologies from the start, and then when a bug comes along, we don't really care.

Getting into this a little bit more: in 2010, Jon Corbet wanted to answer a similar question about how long these bugs really live in the upstream kernel. What kind of lifetimes are we talking about? Are bugs in there for a couple of months?
Are they in there for a couple of years? In 2010 he went back and looked through a lot of the bugs that were associated with a CVE and saw that, on average, it took about five years for a bug to get fixed: it would be introduced at some point, and then five years would go by before it was discovered and fixed. That seemed huge, and I thought we should probably take a look and see whether we've done such a good job with static analysis and all these bug-hunting tools. So I looked through the Ubuntu CVE tracker, which has already done a lot of the work of figuring out where things were introduced versus where they were fixed, so I could actually calculate bug lifetimes. From 2011 forward, we're still seeing about this five-year lifetime, and the stuff that's marked as a high-priority issue is trending toward even longer lifetimes.

The summary is just a bunch of numbers, and I didn't like just putting that slide up, so I tried to visualize what we're looking at here. This is 557 bugs that are associated with a CVE. Red, which you can barely see here, thankfully, marks the critical bugs; orange is high-priority bugs, blue medium, and the rest low. The most recent kernel is at the top and the start of git history is at the bottom, so you can see each lifetime as it stretches from where the bug was introduced all the way to where it got fixed. A lot of the stuff under medium and low tends to be very specific problems or theoretical issues that don't really get hit, so we can zoom in and look just at critical and high, and we've still got giant bug lifetimes. Even on critical: this one is from before 2011. I'm not sure when 2.6.31 came out, but it wasn't fixed until 3.14. So if you happened to release software that depended on any of the kernel versions in between, you're running with a critical bug.
I sure hope you've patched it. That kind of sucks. A question I get a lot is: okay, isn't this just theoretical? No one's actually finding these bugs to begin with, so there's no window of opportunity. That's demonstrably false. People are finding these bugs, sometimes immediately when they're introduced. This is a link to some folks who found one of the critical bugs and boasted that they found it when it was introduced, and they used it for about two years before it was fixed upstream. But most attackers, most of the people we're interested in protecting ourselves from, are not publicly boasting about the bugs they're finding. So we have a couple of data points about this, but this lifetime, this window of opportunity for attack, is still large. And it's not theoretical, because we've actually seen demonstrations.

As I mentioned earlier, we're definitely fighting bugs: we're using static checkers, using dynamic checkers, and we're fixing them. But we're also accelerating the pace at which we introduce bugs, so we're roughly level. Another thing I try to convince people of is that the bugs exist whether we know about them or not. This seems to be a big thing that people for some reason just can't accept mentally: well, I have no open bugs in my bug tracker, so everything's fine. It's like, yeah, but go back to this chart: from 2.6.32 until 3.13, everything was "fine," with a critical bug sitting on your system. So the important thing to think about is to look at where things were introduced and try to gauge how many high-priority and critical bugs
exist now in the software you are running today, and to accept the fact that we as a community don't know about them yet, but they're there. It's important for us to act like there are bugs, because they are there. We have to create systems that expect bugs to be present, and ultimately whack-a-mole is not a solution in the long term. We do want good, clean code, so I'm not saying we should stop all the bug finding we're already doing.

Last year's keynote gave a comparison of the software ecosystem generally to the 1960s car industry, where cars were designed to run but not to fail. It was very comfortable while you were going down the road, but as soon as you crashed, everyone died. That's not acceptable anymore. We laugh at it now, and cars are designed with all these safety features. In a similar fashion, the Linux kernel needs to deal with attacks in a manner where it actually expects them and handles, gracefully in some fashion, the fact that it's being attacked. And this is becoming more and more of an issue because, over time, user space is actually becoming more and more difficult to attack. There's a lot of access control and other approaches being used in user space, and the kernel layer is now one of the largest exposed attack surfaces. Containers have put an even bigger target on the kernel, because you've got who knows how many different user spaces all running on one kernel.
That makes the kernel even more interesting to attack, because you can jump around into other things. And a thing I have also tried to remind people of, although not too much because it becomes upsetting, is that lives literally depend on Linux, and not just in the sense of how the software is being used. You end up with situations like this: I think it was one of the higher-criticality issues, the futex kernel bug from a little while ago, which was turned into the tool called Towelroot that was used to root your Android phone or whatever. But when the black-hat organization Hacking Team was exposed and all their tool sets were visible to the world, it was noticed that Towelroot had been repackaged into a weapon for that group. That means if you're a dissident or an activist somewhere in the world and you're getting spied on, your life is literally at risk because of these bugs. That's a bit heavy, but it's true, and it's something to keep in the back of your mind about how these things affect people in the real world.

I like this photo. This is a picture of a 1959 Bel Air crashing into a 2009 Chevy Malibu, where the '59 is just being utterly decimated: the entire front end and passenger compartment is destroyed. In the Malibu you can sort of see the test dummy in there, but the whole cabin is okay. This is what we want to get to, in our comparison from the '60s car industry to current software ecosystems and Linux in general: we want to make it much more survivable.

Anyway, as I've talked about, killing bugs is nice, but there is some truth in the complaint that security bugs are just normal bugs.
It's like, yeah, some things are a security bug to me but not to you. But that's even more of a reason to have proactive systems in place, because then we don't care what the classification is; maybe it's not considered a security issue at all. And in dealing with how to accept responsibility for code that is running on devices but isn't upstream: if we can create proactive systems that work even with un-upstreamed code, that's good, because the device someone is holding is this consolidation of code that's upstream and not upstream. If we can make it safer regardless of where the code comes from, that's good, since we can't necessarily fix the bugs in out-of-tree code. So again, we can't say it's not our problem.

I want to kill bug classes. If we can stop an entire class of bug from happening, that would be best: make it so that even out-of-tree code can't hit these kinds of bugs. But we'll never kill all of them, so we also want to kill exploitation methods. We want to look at how attacks are performed and ask what we can do to frustrate those attacks, or make them impossible to even start. And the thing I try to convince a lot of developers of is that we need to introduce these features even when it makes development more difficult. There is a technical burden to supporting these kinds of features, but it's similar to the car industry again (the analogy falls apart eventually): they have to work around the fact that, okay, we've got these titanium bars in the side doors now, so can I put the window there?
No, I have to work around that, because those reinforcements are in fact important for the safety of the vehicle. We're in the same place here. We want to make it as easy as possible, but there are going to be situations where there's a trade-off in maintainability, or a trade-off in performance, or these other things, and we have to accept that that is part of the development process. It's not just a thing to be avoided at all costs.

So when looking at how we can defend the kernel, there are areas of dealing with typical exploits, and this uncovers where we want to focus our protections. Modern attacks are usually not just one bug and suddenly everything comes apart; it's usually a series of bugs. So any time we can close individual bugs, we can break chains of attack. But ultimately, as an attacker, you need to know where the target is somehow, to inject what you want, or to find code that you want to run, locate that code, and redirect your execution. So anything we can interrupt in each of those pieces is important.

At which point someone says: okay, so this is a big problem, what do we do? The Washington Post had a nice article around this, which helped launch the self-protection project a bit; it was being worked on in pieces before that. But a lot of this code already exists. It's either in out-of-tree repositories like PaX and grsecurity, or it's been researched and analyzed in academic papers. There's a lot of stuff out there already that we don't have to invent; we just have to figure out how to make it work with upstream. And there's a large demand for having these kinds of protections. These are the questions people are asking me and others: how do we get these protections in?
So that was the start of the Kernel Self-Protection Project. We're using mostly this mailing list as the place where we organize; the second URL is where I announced it and detailed what we were doing. We have put together some wiki pages that describe a lot of what's in these slides: what areas we're focusing on, the rationale for why we're doing it, how to approach things, stuff like that. This is mostly all about people interested in writing the software, doing testing, writing documentation, or just generally discussing the ideas and all sorts of related topics. There are also people on the same list working on all kinds of user space protections, but that doesn't tend to be the focus. I'm looking at just kernel self-protection, because it is a narrow enough scope, with so much work to do already, that I didn't want to spend a lot of time on things the kernel can do to make user space safer; there are already a lot of people working on those. It seemed that the self-protection concept didn't have a big driver behind it, so I wanted to bring attention to that.

These numbers are getting much harder to produce reliably, since people move in and out of projects, people shift between companies, and the technology someone's interested in shifts around, but I would say it's about ten organizations working on twice as many technologies. The other piece that's important: this is a really slow and steady thing. It's not a revolution of change; it's just little pieces slowly making their way in, getting people to understand how they're used, and we'll grow from there.
It's not a fast process. So I put together a list of people who have been writing code, doing testing, and getting involved in discussions. If you feel like you're involved in the self-protection project and I didn't put your name here, I'm very sorry; my brain doesn't work very well. I just wanted to get some list of the people and organizations I could recognize for working on the project. If I have you listed under "self-funded" and I don't know where you work, sorry. If you've moved between companies since I made this slide, I'm sorry. But I just wanted to show that there are a lot of people actually paying attention to this, which is extremely exciting for me. I'm glad to have the help, because a lot of these areas are extremely technically deep and are not things I know very well myself. So when I can say, hey, we need attention here, is there anyone who knows this area, please step up, we need this desperately, it's really nice to have a pretty deep group of people to pull from.

So, to quickly look at bug classes, the areas we're interested in: stuff that's in bold I would say is effectively done for what the protection describes. If I didn't put someone's name near it, generally that's because it happened before the Kernel Self-Protection Project got rolling.
So I can't really take credit, except maybe for the elevated attention the project is bringing. This is the bug class of stack overflows, which covers both stack buffer overflow, where you've overflowed the function's allocated stack area, and stack exhaustion, where you have filled the entire stack segment and you're going past it. One piece right now that Andy Lutomirski is working on is moving the kernel stack into the vmalloc area so that we can have guard pages and can detect stack exhaustion situations. That also includes the removal of thread_info from the stack, which is a common target. But there are a couple of other areas here that we don't necessarily have anyone paying attention to upstream. There is this entire bug class of integer overflow and underflow, and I have examples of each of these classes as well. We've spent some time looking at the PaX REFCOUNT feature, which solves this pretty well; it's being slowly chipped away at. We've got compiler plugin infrastructure in the kernel now, so we can start looking at more of the plugins that exist out of tree.

Let's see, heap overflows. This is an examination of, mostly, bounds checking of reads and writes of common objects. A lot of that goes through the copy_to_user/copy_from_user infrastructure, and a bunch of people have been helping chip away at the pieces that make up PaX's USERCOPY, which is actually composed of about three
distinct protections, and we've started to land the first of those three now. And then there's metadata validation: if you're adding something to a linked list, you can do very simple checking to see, am I actually in a linked list, or have I been corrupted and am I about to be used for an attack? Some of that's going in as well. We haven't had too many people look at the concept of guard pages around heap objects yet.

Format string injection is another common bug class. Removing %n, which was the write primitive that existed in format strings, landed in 3.13. That was nice; it changed the entire attack surface from having a potential write primitive to not having one. Suddenly it's memory exposure, not memory writing, which is significantly nicer. But we can do a lot better with some of our compiler checks on format strings. GCC gets very confused about the types of strings being used, so the built-in protections are not the best, but I think we can improve that with some GCC plugins.

Leaking, sorry, exposing kernel addresses becomes an issue when you're trying to hide where the kernel is, or where targets within the kernel are. We had kptr_restrict, which is a little bit too weak in that it requires a developer to know about it and choose to use it, and that really doesn't meet the bar we set for the Kernel Self-Protection Project, which is: it should just work; you shouldn't even need to know about it. Some of the later pieces of PaX USERCOPY that we are still working on can support examining: where are you about to write this kernel address? Is it going to target a buffer that will ultimately end up in user space?
Well, then you can't do that. So, blocking some of that stuff: we need some folks looking at that as well. And while it is the bane of anyone trying to debug a kernel, attempting to remove kernel symbols entirely from a production build is another option, with the HIDESYM stuff. But I look at memory exposures as a much harder battle and not necessarily low-hanging fruit, because you need a strong way to describe what your threat model is. This one's a little tricky, so I want to save it for later.

There's a bug class of uninitialized variables. You can actually land exploits with these; I hadn't seen any good examples, so I made one. There are a couple of different methods of clearing out memory so that you end up with fewer exposures and you don't have something in an uninitialized state, or in a state initialized by something else: any time you enter a function you initialize everything, zeroing it all out, or you clean up your stack as you exit. There are a bunch of solutions here; no one has started to look at this too closely upstream yet.

Another one, usually strongly related to integer overflow and underflow, is use-after-free. Again, we've got some work done here on clearing out memory after it's been freed, so that certain classes of attacks no longer work. Randomizing the heap freelist can frustrate certain types of attacks too. There's a lot of opportunity here for people to work on more use-after-free protections.

And then there's finding the kernel at all.
Why don't we just move the kernel around? And that's pretty good. We've had a lot of attention given to kernel base address randomization: work done on x86, and it recently landed on arm64 and MIPS. Now we've moved on to randomizing some of the static memory locations, or not static necessarily, but memory locations that are always the same at every boot. On x86 that means moving things like the page tables and the vmalloc area around, because there are a lot of very interesting targets in those memory areas, and if they're randomized, it raises the bar for attack. Then there's per-build structure layout randomization, so every time you build the kernel, where the target function pointers and everything sit actually gets moved around. As an attacker, you have to know information about the build and keep track of what version you're looking at; it really frustrates things. A lot of this is not a deterministic protection; it's randomization, so it can be gotten around with a memory exposure and other things, but it does frustrate a lot of automated attacks, or at least raises their cost.

Direct kernel overwrite: if you get a write primitive in the kernel when you're attacking it, you probably shouldn't be able to just write to the kernel's code area. This shouldn't be possible, but it is still possible on some architectures. That this isn't done really well everywhere is strange to me. It's the 21st century; this is something we've known how to fix for decades, but it just wasn't a priority, because there's this expectation that, oh well, you can write to the kernel text, that's strange.
That must be because there's a bug, so let's fix the bug. No: we need to protect the kernel, because there's always a bug. This is the simplest way of attacking a kernel once you've got a write primitive, so getting this solved for all the architectures would be great.

Once the kernel is no longer writable, you have to look at things that are allocated more dynamically, like function pointers. Julia talked about constifying tables of function pointers; this is a similar thing. Making sure that as much of the kernel as possible is read-only means you have many fewer places to attack, assuming the previous piece has been taken care of as well. Some smaller pieces of this infrastructure: while we can do a lot of static and even compile-time analysis of things we never touch and can trivially make const, the next class of thing in the kernel that is still writable and has these kinds of function pointers is usually written to only during the init phase. So we have the idea that once init has finished, you can make all the memory that was being written to read-only again, and it will continue to run read-only. That's the read-only-after-init piece of the larger idea behind the KERNEXEC pieces. The next step is things that are updated only infrequently: you effectively make the memory writable briefly, write it, and make it unwritable again, so that by default it's unwritable. That's kind of a huge infrastructure change to the kernel. Even doing read-only-after-init took a while; I had to enlist the help of a couple of people, it took architecture support, a couple of different architectures needed help, it had to work on modules, etc. And then there's getting more and more people to use the annotation, which unfortunately is opt-in, but so be it.

Moving on to exploitation: if you've gained control of the kernel, running code that's
living in user space is by far the easiest way to exploit a kernel. So the idea is: if your architecture supports a segregation between privileged memory and unprivileged memory, it should enforce that. This has been introduced on a number of different architectures; I can call out SMEP on x86 and PXN on arm and arm64. But it's possible to emulate this protection in software, and this has been done on arm now for hardware that doesn't have PXN, which only very modern chips do. Then there was a gap on arm64, where you couldn't use arm's method of emulation, but now it looks like we've got a solution for ARMv8.0, which is most of arm64 right now; in the future it'll be done in hardware. I still need this emulation on x86, though, so if anyone would like to work on that, I'd like that.

The next one is user space data. So you didn't run user space code, but you might be accessing a structure in user space that ends up launching you back into doing bad things in the kernel. With this protection, you can't even touch user space memory from the kernel without first saying you want to. These two slides about the emulation look very similar to each other because, ultimately, if you have this protection, you also have the prior protection. But again, I still want this on x86, because we've got a long tail of hardware that does not support SMAP, and this is a big deal.
That is one of the major ways we're going to kill a lot of exploitation methods, because it pushes exploitation into finding a place in the kernel where you can both write and execute memory, which is very small and should be zero, or else you have to start building return-oriented-programming-style attacks against the kernel. This raises the bar quite a bit, which is why I want the emulation as well, because not everyone has last year's hardware.

Which gets us to ROP. At some point we can get to full return-oriented programming protections. There's been a lot of research done on this; there are some examples in PaX, and there's a lot of work being done on control flow integrity and other things like that. We've had pieces of it. I put the BPF JIT hardening that was worked on recently here, because it's similar in that you can instruct the kernel: please create the code I want to attack you with and put it in kernel memory. The kernel says okay, and then you run it. That's not good. So if we can harden that against attacks: blind the constants, relocate it, randomize its position, make sure you can't write to it once it's installed, and so on. Those protections have gone in.

So, to quickly reorganize the order of these features: I like covering the rationale for why certain things are being worked on and what kind of protections they provide, but I usually also get the reverse question: okay, so what actually made it into the kernel? So I've turned this around. In 4.3, the PAN emulation on arm landed. Ambient capabilities landed, and I'm just marking that as notable.
It's a user space protection, but it changes a lot about how user space can think about operating with capabilities; it frequently removes the need for file system capabilities, or augments them in a way. And seccomp on PowerPC landed; I'm biased on that one, I'm the seccomp maintainer. In 4.4, a long-standing static target on x86 could be removed. In 4.5, again a user space protection, but we gained control over the amount of entropy used for user space ASLR.

4.6 got quite a bit of stuff. KASLR landed on arm64. RODATA was enabled by default, which is what I was talking about earlier: most distro and vendor products tend to turn on RODATA already, but having it on by default is good for anyone who's less familiar with how their kernel is built. On x86, RODATA is now mandatory; it is not possible to turn it off. There are no config ifdefs anywhere in the code anymore, so you just get proper memory protections on x86. Zeroing of heap memory on free landed, if you enable the debug mode; the basic infrastructure for read-only-after-init landed; and execute-only memory on x86 landed, for processors that no one can get yet, but it's there, so someday when we get that hardware, that'll be nice. That one is more about memory exposures: if you get a read primitive, you can't read out the entire kernel and find your attack targets, so if the text can only be executed, that makes your life as an attacker more difficult.

In 4.7: KASLR for MIPS, SLAB freelist randomization, and BPF JIT constant blinding landed. In 4.8 we got SLUB freelist randomization; KASLR on x86 was expanded to cover the entire physical memory range instead of just the first couple of gigs; work started on randomizing the various memory bases in the kernel on x86; and the GCC plugin infrastructure finally made its way in, with a couple of example plugins that don't really have much security relevance yet. That was kind of an invasive series of changes.
So having the infrastructure in place means it's now trivially possible to add GCC plugins. You can pluck them out of PaX and grsecurity and run them on a mainline kernel now, which is quite nice, although usually, for many of the PaX and grsecurity plugins, you'll also need a bunch of annotations in the kernel as well. So it's not totally trivial, but having the infrastructure in place is pretty important. We have the first step of some usercopy hardening. And another attack surface reduction thing: there was a sort of intentional hole in seccomp with ptrace, where if you had a seccomp filter you could bypass it with ptrace. I'll talk more about that tomorrow, but that's been fixed.

Then my magic crystal ball predictions, because who knows. Hopefully the latent_entropy GCC plugin will go in; that's designed to get more state into the random number generator, usually for embedded devices or things that don't have good hardware random number generators. The vmalloc stack on x86 and the linked-list hardening that I talked about should land, and it looks like PAN emulation for arm64, which I'm extremely excited about, should hopefully be in 4.9 as well. But again, I can never predict what's going to go in.

And I'll quickly cover the distinct challenges. Probably the biggest challenge is culture, on both sides of the fence, out-of-tree and
Upstream, there's quite a bit of conservatism about code changes. The example I like to give — it's a user-space protection, but it's still indicative of the problem — is symlink restrictions in /tmp: the /tmp races on symlinks that have plagued Linux and Unix-y systems forever. A very simple, clean solution was designed sixteen years before it actually landed in the upstream kernel, and five or six people made attempts over that decade and a half to get it in. It really takes a lot of persistence and patience.

On top of that, I feel like a lot of the upstream developers need to accept the responsibility: okay, we do need to pay attention to this, this is important, and we have to accept that it's going to take work and we're going to have to deal with the technical burden. And the people trying to get this stuff in need a lot more patience, and need to understand how the kernel is developed: that it's not an instantaneous change, and that things are evolutionary in their process.

And of course we have the technical challenge. A lot of these protections are incredibly complex, so they are even harder to debug. There's a lot of innovation involved — things that already exist in the world are not necessarily suited to how upstream does its work. And collaboration on these changes is a big deal. Even if you have fantastic code, if you can't describe why it's needed, how it helps, and why it does what it does — really documenting these changes can be a big challenge. Developing against upstream means you're not writing code for the kernel, you're writing code for the kernel developers: other people are maintaining your code, other people need to understand your code, and other people are not necessarily familiar with what you're doing. So having stuff really understandable to people who don't know the area is pretty critical.

And of course, resources: getting more people to help. If your company is interested in this, or if you are, getting dedicated people on it — and getting dedicated testers, which has been incredibly handy. Even if you're not big into writing really complex technical things like this, just taking a patch set, running it on the hardware you have, and saying "hey, this works for me", or "you forgot this corner case and it blew up, here's my dump, please fix it" — that's incredibly handy. Silence is by far the worst thing in any upstream development. So if you see a patch series posted and think "oh yeah, I should really download that and try it" — that really matters.

And then, ultimately, we're going to have to recognize that another major part of this is that vendors will have released products running old kernels, so getting people to backport potentially complex and invasive changes is going to be another big area of work.

And that was a lot of slides, and I'm done. That's where you can look at the wiki, that's the list, that's the slides. I'm not sure how much time I have, but if there are any questions, I can try to take them now.

Yes — I didn't hear the last part; an example of what? Sure. Let's see — I think a good recent example is the move of the kernel stack onto vmalloc. It introduces limitations — I mean, you weren't supposed to do it before, but it did function — on using a stack buffer as a target for DMA. That really doesn't work with a vmalloc'd stack, so it requires changing the infrastructure of drivers that may be performing those kinds of operations.
So that's sort of a one-time cost. An ongoing maintenance burden might be, say, if we get emulation of user/kernel memory segregation: the infrastructure for that tends to be pretty invasive on per-CPU data, on page tables, and on a bunch of other things, and it complicates how those layouts work, how the code works, and how task switching occurs. Yeah — exactly; it's maybe not the best example, but usually it's complexity that's being introduced.

It's easy to get a security fix or a security feature in when it has additional benefits. That's not always true, but the vmalloc'd stacks are a good example of that, with debuggability and a couple of other things.

I think I saw you first. — So I have, cowardly enough, tried to avoid that fight as much as possible by hiding things behind config values. I sort of use the time frame of introducing a feature that may have some performance impact, but placating people by saying, okay, it's behind a config, so only people who are interested will use it. And then in a year or two, when every single distro and every major Linux vendor has turned that config on, I can turn around and say: look, this needs to be default-yes, because it's already effectively default-yes for the world. So I try to avoid that fight, because it's an incredibly hard fight to have. I simply say, okay, if it's worth it, then everyone in the world who's interested will turn it on, and we can prove it out later. So I've been collecting this long list of configs that I'd like to make default-yes — but that's not a battle I want to have right now.

We should wrap this up. I'll be around in the breaks — come see me and ask questions. Thank you.
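The config-first strategy described above looks roughly like this in Kconfig terms. The symbol names here are illustrative, not real kernel options; the point is the `default` line, which can be argued up to "y" once the world has opted in (or the option dropped entirely, as happened when rodata became mandatory on x86):

```
config HARDENING_EXAMPLE_FEATURE
	bool "Enable example hardening feature"
	depends on HAVE_ARCH_HARDENING_EXAMPLE
	# Step 1: land it default-n so only interested builders opt in.
	default n
	help
	  Illustrative option only. After a release cycle or two, once
	  distros and vendors have turned this on in their own configs,
	  the default above can be changed to "y" with real-world data
	  backing the change.
```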