So, welcome to this last session of the Linux Security Summit North America, our first time ever virtual. This is our panel session; we had one panel session already yesterday, and today's panel has the same topic as yesterday's, so if you attended yesterday you might already have started to think about the questions you want to ask. Feel free to start asking them now. I have not had much time since yesterday to think about how to improve this and make it more interactive, but we decided that one way would be that, as the questions come in, the panel members each pick a question they like, and they can also make a small round table among themselves, just picking questions of interest. But with that, let me first introduce our panel members. Let's give them a small virtual round table so each panel member can introduce himself or herself, a few words about them. We will start in the order people appear here on the screen. So Andy, you will be the first one.

Hi everyone, I'm Andy Lutomirski. I've been working on the Linux kernel for a while now. I mostly do x86 stuff, I do some security stuff and some code review, and hopefully I solve more problems than I cause.

Thank you, Andy. Next will be Christian.

I'm Christian. I work on the Linux kernel as well, surprisingly, though not for as long as Andy has.

Thank you, Christian. Dmitry?

Hello, my name is Dmitry Vyukov. I work as an engineer at Google. For the past five years our team has been working on bug-detection tools for the kernel, in particular the address sanitizer, KASAN, which finds use-after-frees and out-of-bounds accesses; more recently the memory sanitizer, KMSAN, which finds uses of uninitialized data; and the concurrency sanitizer, KCSAN, which finds data races.
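The shadow-memory idea behind the address sanitizer Dmitry mentions can be illustrated with a toy sketch. This is a simplified model in Python for illustration only, not the kernel implementation: real KASAN instruments loads and stores at compile time and keeps compact shadow bytes, but the core idea of poisoning freed and out-of-bounds memory is the same.

```python
# Toy model of the shadow-memory idea behind KASAN (illustrative only).
# Every "address" has a shadow flag: True = addressable, False = poisoned.

class ToyShadowHeap:
    def __init__(self, size):
        self.mem = [0] * size
        self.shadow = [False] * size   # everything starts poisoned
        self.cursor = 0

    def alloc(self, n):
        start = self.cursor
        self.cursor += n
        for a in range(start, start + n):
            self.shadow[a] = True      # unpoison bytes on allocation
        return start

    def free(self, addr, n):
        for a in range(addr, addr + n):
            self.shadow[a] = False     # re-poison bytes on free

    def load(self, addr):
        if addr >= len(self.shadow) or not self.shadow[addr]:
            raise RuntimeError(f"KASAN-style report: invalid access at {addr}")
        return self.mem[addr]

heap = ToyShadowHeap(64)
p = heap.alloc(8)
heap.load(p)          # fine: in-bounds, live allocation
heap.free(p, 8)
try:
    heap.load(p)      # use-after-free: the shadow is poisoned again
except RuntimeError as e:
    print(e)
```

A load through poisoned shadow produces a report instead of silently returning stale data, which is exactly why these bugs become visible under the sanitizer rather than manifesting as exploitable corruption later.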
We are also working on kernel fuzzing, in particular on the notorious syzkaller fuzzer, which is a coverage-guided kernel fuzzer, and on syzbot, which is high-level automation on top of the fuzzer that does continuous fuzzing, automatic bug reporting, and bug tracking. It has found thousands of bugs in the Linux kernel and in some other operating system kernels. Thank you.

Thank you, Dmitry. Emily? I'm hearing a lot of echo now, so I think somebody needs to mute their phone. All right, thank you.

So, I'm Emily Ratliff. My professional career in Linux security started about 19 and a half years ago, when I was one of the first two people to join the core Linux security team in IBM's Linux Technology Center. I worked on the first Common Criteria evaluations, the first trusted computing enablement for Linux, and on the security architecture for IBM's first public cloud. I've worked on Secure Boot for AMD's Mullins chip. I've worked on the Core Infrastructure Initiative and with the Ubuntu security team. Most recently I rejoined IBM, working for IBM Security, to get a view of the world from security as an application perspective. Throughout my career I've worked at all levels of hardware and software, with a focus on open source and open standards. Back to you.

Thank you, Emily. And last but not least, Naina?

Hi everyone, I'm Naina Jain. I work as a software engineer in the Linux security team of IBM Cognitive Systems. My regular work involves enabling secure and trusted boot on Power systems. Most of my work in the kernel has been related to IMA, TPM, and key management. Thank you.

Thank you, Naina. As for me, I have not changed my workplace since yesterday, so I'm going to skip my intro and save us some time. So let's really get to the panel itself.
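The coverage-guided fuzzing loop that Dmitry described for syzkaller can be sketched in a few lines. This is an illustrative toy, not how syzkaller actually works: the real fuzzer mutates structured system-call programs and collects coverage from the kernel via KCOV, whereas here the target is a trivial Python stub. The point is only the feedback loop: mutate an input from the corpus, and keep it if it reaches new coverage.

```python
import random

def target(data: bytes) -> set:
    """Stand-in for the system under test: returns the set of 'edges' hit."""
    cov = {0}
    if data[:1] == b"k":
        cov.add(1)                         # first comparison passed
        if len(data) > 1 and data[1:2] == b"e":
            cov.add(2)                     # second comparison passed
    return cov

def mutate(data: bytes) -> bytes:
    buf = bytearray(data or b"A")
    for _ in range(random.randint(1, 3)):
        if random.random() < 0.7 or len(buf) >= 8:
            i = random.randrange(len(buf))
            buf[i] = random.randrange(256)     # flip one byte
        else:
            buf.append(random.randrange(256))  # occasionally grow the input
    return bytes(buf)

def fuzz(iterations: int = 5000, seed: int = 0):
    random.seed(seed)
    corpus = [b"A"]
    seen: set = set()
    for _ in range(iterations):
        inp = mutate(random.choice(corpus))
        cov = target(inp)
        if not cov <= seen:                # new coverage: keep this input
            seen |= cov
            corpus.append(inp)
    return seen, corpus

seen, corpus = fuzz()
print(len(corpus), sorted(seen))
```

The coverage feedback is what lets the fuzzer walk through nested comparisons step by step instead of having to guess a whole magic input at once.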
I want to first start by recapping for people who haven't attended yesterday's panel, and by the way you can watch it anytime you want, because everything is already available. Just a couple of points I wanted to recap, which we spent most of the time talking about in yesterday's panel. The first point, which I think we spent most of the time on, was really about testing: Linux kernel testing, and more the lack of testing in the Linux kernel. That has been given as the reason for many of the bugs we have, which have been called out; we have a lot of vulnerabilities, a lot of bugs reported after each release, including regressions. A lot of discussion went into how we can improve testing, and how we make sure that code which gets into the kernel actually has tests that get run. There was a lot of speculation around that; that was the first point.

Another point I could summarize as a lack of focus on user space. The point was brought up that nowadays we focus a lot on kernel security and we have greatly forgotten user space, the security of user space, and how all of it integrates. Quite often security decisions are pushed into user space with the thought that users will figure it out. But what tends to happen is that they don't figure it out, so the end result is not very good.

And of course performance, no surprise: performance and security, long-time friends. I really liked Jann's point yesterday about turning security issues into fixable performance problems. I really like that idea, but I just don't know how long I would live on the mailing list if I tried to propose something like this to a maintainer, saying: here is a great security patch for your subsystem, it just creates this big performance problem, but it's a good one.
Now, as the maintainer, you can fix it. I don't know how long I would live on the mailing list, but I really like Jann's idea. There were many other points discussed, but I think those were the main ones. Mimi also talked about the importance of understanding trust, how trust applies to different things within the context of the integrity subsystems, and how it's important to differentiate and understand especially packaging and provenance. And then we had Brad's talk today, where he gave his view on what he thinks Linux security should be doing in the next, I don't know, ten or so years. So I guess we can also start discussing some of that. But before we go into question-and-answer mode, I will do another small round table so that each panel member can bring up the points they consider important with regard to the current problems in Linux security and what we should be doing about them. We will start in the same order as the introductions. So Andy, if you could start.

Hi. I definitely think testing is particularly important, and not just testing but making sure people actually run the tests. Syzkaller is excellent at this, but there's writing tests, running tests, making sure that they have run recently. I'm currently chasing a security bug that's embargoed, which, embarrassingly, we have a test for, and the test has been failing for several months and nobody noticed. So we definitely have a lot of room for improvement in the kernel. I don't know quite what the user-space situation is; that involves a lot of coordination with distros, and it's a little bit hard to pull off.

Can we go to Christian? Christian, I think you're muted. So maybe we can go to Dmitry.

Yes, can you hear me? I'm actually going to repeat roughly all of the same. I'm going to talk about bugs, testing, and quality.
There are lots of bugs in every kernel release, literally tens of thousands, and lots of them affect security in some way, like memory corruptions or information leaks, and security is a weakest-link problem. Logical protections like security modules, containers, integrity, and even users and permissions can all be compromised by memory corruption. Then all of those things we hear about, like CVEs not working or not being filed, fixes not being backported to stable, or vendors not updating their kernels: I think the major reason for those is simply the very large number of bugs and fixes. You can file CVEs if you have, say, 10 or maybe 100 bugs, but if you have 20,000 bugs you can't do CVEs anymore; it's just too much work. And vendors are not updating their kernels because they're not able to keep up, and it's not their fault, I think. You can imagine doing a few security fixes per month, that's a reasonable rate, but if you look at, say, the 4.14 release, that's almost 20 patches per day for the past two and a half years, every day without weekends, or 550 patches each month. And it's not even the full number: we know that patches have been dropped if they don't apply cleanly, and Brad just told us that we missed more than a thousand fixes. So I'm not surprised that nothing works well at this rate, and this just doesn't look right for the most security-critical infrastructure project in the world.

I think we need to reduce the number of bugs per release by orders of magnitude to make the situation manageable and to make those security processes even possible. So what should we do about this? I think we need to make testing and quality an integral part of the development process and of the project, and not try to push it to some third parties or to users and wait for them to do it. Users are very bad testers: they don't necessarily see bugs, they don't use debugging tools, they don't report bugs most of the time, and they don't test any corner cases.
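Dmitry's patch-rate arithmetic can be checked in a few lines. These are the approximate figures quoted in the talk, not exact git statistics, so the result is a sanity check rather than a measurement:

```python
# Rough check of the 4.14 stable-tree patch-rate figures quoted above
# (approximate numbers as stated in the talk, not exact git statistics).
days = 2.5 * 365            # ~2.5 years of 4.14 stable maintenance
per_day = 20                # "almost 20 patches per day"
total = per_day * days      # overall patch count implied by that rate
per_month = total / (2.5 * 12)

print(round(total))         # 18250 patches overall at that rate
print(round(per_month))     # 608 per month, in the ballpark of the quoted 550
```

So the two figures in the talk, 20 per day and roughly 550 per month, are mutually consistent, and either way they describe a volume no manual CVE or backport process can keep up with.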
For the project, this should include policies like: developers need to submit tests with new functionality and regression tests for bug fixes, and drivers need to be testable, because if they can only be tested with real hardware, it means they are not tested on CI systems, they are not tested by developers, and they are not tested during the stable process. Then static analysis needs to be integrated into the patch-submission flow, and we need a green light for people deploying new static analyzers. We also need more CI tooling and automation, because at this scale, anything that could be automated but needs to be done manually simply won't be done at all, and that's what we see with testing. This requires unified formats for tests, unifying all aspects of tests really, like prerequisites, output formats, and how to run them. In particular, that means developers need to stop inventing their own test frameworks and systems on the side. We also need things like unified crash reporting, because currently you can run tests but you won't be able to understand, in an automated way, whether the kernel crashed or not; this complete automation and unification is the reason why syzbot is so effective.

It may appear that I'm asking to do more work and increase costs, but actually good testing automation in the end significantly reduces the cost of development, because we don't need to chase bugs, we don't need to fix regressions again and again, and we don't need to do the small follow-up fixes for static-analysis warnings and so on. So that's my view, and thank you.

Thank you, Dmitry. Let's go to Emily next. I don't know if Christian has returned; we can get back to you at the end of the round. Are you actually here?

I am here.

Oh, okay, so maybe we can get back to you.

I hope I'm not breaking off again, I'm sorry. This is Germany's internet, I tell you. So yeah, I was expecting that Dmitry would cover all of the technical details with syzkaller and so on.
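The unified crash reporting Dmitry argued for earlier amounts to scanning console output for known oops markers so that automation, not a human, decides whether a run crashed. A toy version of that detection step can be sketched as follows (illustrative only; the marker list is a small subset, and syzkaller's real report parser recognizes far more formats and also symbolizes and deduplicates reports):

```python
# Toy version of automated kernel-crash detection in console output.
CRASH_MARKERS = [
    "BUG: ",
    "KASAN: ",
    "WARNING: ",
    "UBSAN: ",
    "Kernel panic - not syncing",
    "general protection fault",
]

def find_crash(console_log: str):
    """Return the first crash line in a console log, or None if it looks clean."""
    for line in console_log.splitlines():
        if any(marker in line for marker in CRASH_MARKERS):
            return line
    return None

LOG = """\
[   12.345678] random: crng init done
[   13.000000] BUG: KASAN: use-after-free in n_tty_receive_buf+0x12/0x40
[   13.000001] Call Trace:
"""
print(find_crash(LOG))  # prints the BUG: KASAN line
```

Without an agreed-on output format for test results and crash reports, every CI system has to reinvent this parsing, which is part of why so little of it gets automated today.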
Christian, I guess we lost you again. Okay, so maybe Emily, let's go to you.

All right, let's hope this is more stable. As I mentioned in my introduction, it's been quite some time since I worked on the Linux kernel, so I wanted to take this opportunity to give my three wishes for the broader Linux security ecosystem beyond the kernel. I really enjoy coming to these events and hearing the latest innovative solutions coming out in the Linux kernel, and it brings me great hope and optimism, which lasts until October, when we engage in Cybersecurity Awareness Month and I find out that once again the latest state-of-the-art advice for end users is to not click on links. So it makes me wonder, you know, are we really making progress?

When I think about the future of Linux security, I think about how the paths of code to the user are proliferating. In the past we had the Linux distros as a curation point, where users expected them to determine which are the most usable and most useful open source security projects. That may have been an iffy proposition; some distros did more curation and some did less, but they were still commonly considered a curation point. Now, with all of the different ways code reaches users, Docker containers and many more, one of the big problems is how open source projects signal to each other that they are developing with security in mind. The Linux Foundation did create the badging app for this, and it is still ongoing; it's been ongoing for the past four years, and this past month curl and the Linux kernel became the fifth and sixth projects to become gold badged. The importance of the project is not necessarily in the badge per se, but as a conduit for the discussion of what makes a secure open source project. So my wish is that the badging app gain more visibility, more projects, and more people contributing to that discussion.
My second point that I'd like to talk about today is about resources that are out there, available to open source projects. You may all be aware of Mozilla's MOSS program and its associated Secure Open Source program, which funds audits for open source projects that Mozilla and Firefox use. But I'd also like to take this opportunity to make sure you're aware of the Open Technology Fund, which has as its mission secure communications and censorship circumvention, and which also offers, as part of its red team fund and core infrastructure fund, security audits for software that supports that mission. So my second wish for the future of Linux security is that more projects take advantage of these.

The last thing that I'd like to talk about is that even as Linux has really taken over the world, in safety-critical systems and IoT devices, running massive cloud infrastructures, knowledge of Linux security is really not keeping up with that widespread proliferation, especially at the top level, the highest level of applications, projects intended for IT security operations. You tend to see these projects coming out with fully populated information about Windows security and then a TBD: yes, we know we need Linux security, that's coming soon, and we really appreciate your contributions. So my third wish for Linux security is that knowledge of Linux security for end users proliferates much more widely and keeps up with the adoption of Linux itself. Those are my three wishes. Back to you.

Thank you, Emily. And Naina?

Yeah, thanks Elena. We had some great discussions in yesterday's panel, and Mimi Zohar brought up this question of the need for file provenance. Today our whole world revolves around technology, and technology has played a significant role, especially in this time of pandemic, whether it is cloud infrastructure for online shopping, telecom networks, collaboration
tools, or end-user devices. We have also heard about a lot of cyber attacks happening during this time. In such a scenario, file verification could play a very significant role in building our trust in the software, the applications, or the firmware that gets run on these devices, or on the infrastructure these devices depend on. But the need for signature verification implicitly brings with it the need for asymmetric public keys, and then a similar question of trust applies to the keys used for verifying those signatures. In this global world, where every device has a firmware stack, application stack, and operating system from different vendors, the signers of these different binaries are by default different owners of the keys.

So in today's discussion I would mainly like to emphasize the issues related to key management in the context of the kernel. Key management is a full lifecycle, from creation, provisioning, and retention through to revocation, and one of the problems I see is how the kernel trusts asymmetric public keys which are not owned by or built into the kernel. I first realized this issue while enabling secure boot on OpenPOWER systems. In this case, the interesting thing was that our bootloader itself is a kernel, and the keys for verifying the actual host OS are owned by the firmware, but the kernel does not trust firmware keys, because they do not match the trust requirements of the existing kernel keyrings. The addition of a new platform keyring resolved this issue by isolating the firmware-provided keys from the other keys in the kernel, and it was used only for kernel verification. But this was a small step, only for the purpose of using a firmware key for kernel verification, and there's still an issue of how the kernel trusts keys from various other layers if those are not signed with built-in keys. Is there a possibility where the kernel can trust something from
outside and load keys dynamically? Is there another root of trust we can depend on, like trusted hardware or external key-management servers? How does the kernel trust any external source? Additionally, how do we ensure that our keys are getting the relevant checks which are needed per the X.509 spec or other specs? And is the revocation of keys happening as needed, is it handled sufficiently, are we even doing it at all? These are the main issues which I would like to get discussed on this call. I would also say that we are seeing most of these issues in the server domain, and there are people here from various other domains, like IoT, embedded, and containers, so it would be interesting to know what types of problems they are facing in a similar context, and then we can see how we can come up with a flexible solution in the kernel for handling key-management issues. Thanks, Elena.

Thank you, Naina. So, Christian, are you able to talk now? Okay. So that was the round table, and as usual nothing goes as planned: by the time I finished the round table in yesterday's panel, there were already, like, a million questions in the queue, and now I can only see two of them. So at least it looks like I can encourage people to ask more questions; you actually get a chance to have them answered this time. And we had this very bad background noise; now I think it's gotten better. Okay.

So since we have only two questions here, I can ask these questions to the panel, and then we'll see how we go from there. The first question is on security and crypto. The question is: the security and crypto subsystems seem quite far apart, with things like key management being artificially split in two. For example, there's no easy way to accommodate crypto acceleration for encrypted filesystems, and the TPM doesn't count because it's slow. This means that most of the time the keys are lying around in
plain text. So, a question to the panel members: how do you see the situation improving? Does anyone want to comment on this?

I can take it.

Sure, okay.

So I think there are basically two points here. One is related to crypto acceleration: if I'm understanding the question right, one thing is whether you offload the computation to crypto accelerators. The second thing is that the keys are lying around in plain text, so can we use crypto accelerators, or perhaps hardware security modules, for storing those keys? If you look at the latest work which has been happening, there was the TPM being used for trusted and encrypted keys, which are symmetric keys, and there have recently been some patches in the area of trusted execution environments to back such keys as well. So I think people are trying to rely more on the trusted-hardware side for these keys, which are symmetric, and the same question now is how we can use that hardware for asymmetric keys too. So I'd say the situation is improving. With more and more secure-hardware options coming up, more people will be trying to use them, and patches are coming around. As different varieties of hardware appear, there might be a need for a generic layer which can work transparently and handle the underlying hardware, and maybe we will see things move in that direction.

Thank you. Does anyone else want to comment? Okay, next question. The next question is in a quite different area: did the Linux kernel really meet the 80% statement test coverage required for the CII silver/gold badge? It seems unlikely; other maturity frameworks would place that at the end of the maturity-level journey. I don't know what part of this means.

I can take this question. One of the nice things about the badge app is that the evidence is made public as the questions get answered,
and so if you go to the badge app and you look at the entry for the Linux kernel, that's actually one of the unmet criteria. It's there with the comment that Linux kernel tests test individual feature functionality, not code branches, and generally only new features, not pre-existing functionality. The lack of meeting that criterion did not prevent it from getting the gold badge; you don't have to meet every criterion in order to get to the gold badge. And it's entirely possible that other maturity models wouldn't agree with that; that's the beauty of having different maturity models, and it's also the beauty of the CII badge app program: you can go in there and have the debate about whether or not it should get that gold badge without that 80% coverage.

I also have an answer. I am not working on CI systems, but last September I asked all the main CIs, like kernelCI, CKI, and LKFT, about kernel code coverage, and none of them had any infrastructure to even collect coverage. So I suspect the answer is that nobody really knows what the coverage is. I do know some coverage numbers for syzkaller. It's somewhat hard to assess, because we work at the compiler level, not necessarily the statement level, and we don't count some functions; and if you're testing on the x86 architecture, how do you count other architectures, do you take them into the total number of statements or not? But I get a number somewhere around 8%: syzbot covers about 8% of the statements that were compiled into the kernel. One remark here, though: as was mentioned yesterday, syzbot doesn't actually do any testing of the kernel in the traditional sense; it only tests for basic safety violations. For example, if you create a socket and the kernel instead deletes all of your files, that's perfectly fine with syzbot: it didn't crash. So there are also different degrees of coverage,
like what does it even mean to be covered.

Does anyone else want to comment on this one? And I hope you're not leaving us.

No, I'm still here, just getting out of the sun.

Okay, so if no one wants to comment on this one, let's move to the next question.

Oh, if you can hear me, I have a question for Dmitry actually, because we talked about this before. Do you feel, are you happy with the impact you're having right now with syzbot, syzkaller, KASAN and so on?

Am I happy with the impact? To some degree, I would say. We definitely see lots of bugs getting fixed because of our work, and that's very nice, and I'm happy with that. On the other hand, I would like tests to be added for the bugs that get fixed, for example, and that's usually not happening in most cases. I would also like more bugs to be fixed, because we still have, I think, about 600 open bugs. So I don't know; frankly it's hard to say if I'm happy or not. Well, I guess mostly yes.

There's one bug in the console code that syzkaller keeps hitting every couple of days, and they keep sending us reports, and nobody's fixed it yet. So someday we'll fix all 600 bugs.

So Christian, while you are able to speak, do you want to try again to present your point of view?

I'm very happy, I'm very happy too, and I'm sorry for all the problems I'm causing, but that's what I'm known for. Just joking. One of the things I tried to make a point about before is that a lot of us have just developed in our own domain...

You're breaking up again. Maybe you can type your point? No? You can try again. It starts out fine and then it just keeps breaking.

Am I back now?

Maybe if you turn off the video it will help. Now we can just hear someone typing. And we've completely lost Christian, I think. Okay, this is really the downside of us being virtual, so I'm truly hoping that next year we can actually have a nice panel with the members there in person. Christian,
are you back?

Yeah, I'm trying to be. Am I breaking up?

No, not now.

Okay. So, we always tend to think, or we used to think, that there's an inherent conflict between security and performance, and performance has traditionally mattered a lot on Linux, and it's good that it matters; there's nothing inherently bad about this. But I think this false conflict has sometimes held us back. And there is another aspect: we need people who can tell us the difference between security theater, as some people like to refer to it, and stuff that actually improves the state of the art of kernel security. I think that is actually a problem. We don't have a lot of people who can provide the quality of review that is needed across the board for new kernel features and so on, people who can tell us, first of all, whether this introduces a significant security improvement, based purely on logical analysis.

It sounds like he is still trying... But that's a discussion that I'd really like to have, because it's one thing to talk about security theater at the kernel level, but then there's also what's important to users. Sometimes we spend so much time focused on securing the system that we lose track of the fact that we're still allowing things like ransomware, which really impacts the user: all of the security controls functioned, but the user still lost their data. So I think while we're having that discussion about what's real security and what's security theater, we need to focus on the goals as well.

Yeah, I think that's actually a very good point. And Christian, are you able to continue? Okay, no. But I also agree with Christian's point that we need more review, and I think Brad has to some degree talked about this today, saying that we don't have enough expertise, for example people like Jann Horn, who reviews patches, but
not necessarily just from the point of view of reviewing the patch itself, but rather assessing how effective a particular mitigation would be against certain exploits, for example, or against certain threat models. It's actually not a trivial task, because it's very hard. When we bring a security patch forward, it's very easy to measure its performance impact; it can be horrible, or not horrible, or something in the middle, but it's something fairly easy to measure. When it comes to security, it's very hard to measure how effective a certain mitigation is. You can say that yes, this mitigation stops this particular exploit from working with a particular technique, but even if you close that exploitation technique, it might be very easy for an attacker to just go and try something else, and it might not even be worth the amount of effort that has been put into it. So I think that's really a big problem. If we could get more reviews from people with exploitation knowledge on the hardening techniques being proposed, I think that would be very valuable. I don't know if this was one of the points Christian was trying to bring up, but at least this is my take on it. Okay, so does anyone want to comment more on this one? Andy, maybe you, from the kernel side?

I don't have any amazing fixes here. Getting people to do code review is hard. It's a lot of fun to go and write new code and make things work, and it's not quite as much fun to go sit and read code that you had no direct involvement in, and that nobody's paying you to read, and to say let's make this better, or let's see if it's already great. So it's a tough thing to recruit for.

So what do you think, from your point of view, because you're kind of representing the kernel community, maybe not so much the security-related kernel
community. What would make you more interested in reading, let's say, a new patch series for some security hardening feature? How should that patch series be framed to help us? Should it start from a very strong case pointing to an existing exploit? How do we make it more appealing, let's say, to a kernel maintainer?

I think that's hard; I don't have any magic fixes. I think part of it is just recruiting people who find this to be fun. There's certainly a community of people who have a lot of fun breaking things, and I think the Google Project Zero people are an example of this, and it would be great to try to recruit more people to see their own role as breaking things in the Linux kernel, and maybe even breaking things that haven't been merged yet.

But what about getting back to the maintainer position? Maintainers are usually very loaded: there are a lot of features being developed, bug fixes and so on, everything on the usual schedule, and then there are people coming with hardening patches, which might be, you know, outside your normal cycle. So this is what I was trying to ask: how can we frame these patches in a way that makes it easier for maintainers to look at them, to find time to look at the security patches? Because in many cases it's ultimately up to maintainers to take certain changes in. Security changes and patches might be proposed, but the maintainers need to find time to look through them, and security patches might not be like the usual ones.

One thing that's kind of specific to hardening patches: when someone sends a patch to enable new hardware, or a patch to enable a new feature, or a patch to fix a bug, it's very clear what the patch is doing, what the benefit is, why we would want to merge it. When someone sends a patch to harden a little
corner of the kernel, sometimes it's a little hard to see how that fits into the big picture. A lot of the patches we see are hardening against certain exploit techniques, and I think a lot of the maintainers, and a lot of people in general, don't have the clearest understanding of when an exploit technique matters. As an example, right now in the x86 space we just saw patches to harden access to the CR4 register, and I, as a maintainer, don't necessarily have the clearest idea of what precisely we gain by doing this. It may be we gain a lot, it may be we gain a little, it may be that this plus a few other things on the radar down the road will give us a big advantage, but sometimes it's hard to see what we're actually accomplishing. Clearer descriptions in the patch, and maybe feedback from people who are in tune with how exploits are written, would help with this. If someone came up and said, hey, I exploited the following bug, or would have exploited the following bug, using this technique, and this hardening patch would have made a difference, that would be huge.

Thank you, Andy. Does anyone want to comment on this or continue? Yeah, Christian?

Yeah, I would like to try, if you can hear me.

Yeah, we can hear you.

Wow. I called in over the phone. So, I think Andy is pressing a really good point. A lot of times when I see hardening patches come onto the list, they need to explain their threat model, and it needs to be clear what the larger benefit for the whole kernel is: is there really something that we're protecting against, or is this just, as I said before, security theater? And honestly, even in core kernel stuff, I'm not always sure that maintainers or developers of features are the best people to actually judge this. A lot of the time it depends on me CCing people who have written exploits in this area; in this case my example is obviously
Jann, who I always cc when such things come up. So we don't really have a community of people who know their way around exploits, who know their way around security and security research, and that's kind of a problem. That's the point I tried to make in my introductory statement: in order to push your security features upstream, relevant security features, you obviously also need to have, let's put it like this, clout within the community. People need to trust you, people need to recognize who you are: that person has taken on responsibility in the kernel, I know that if stuff breaks that person is going to be around, I can rely on stuff being fixed. Once you have crossed that threshold, I think it's much easier for people to say: okay, this is something we haven't done in the kernel before, and I can kind of see the benefits but I can't really analyze it myself; but I know the person who is pushing this, I can trust that person, so I'm fine with pushing this feature upstream. And that means, in the end, becoming a maintainer in Linux, and that's not necessarily a job that a lot of people enjoy, I think. Yeah, I think that's actually a very good point, especially about the community. The community of people who are experts in exploits exists, but it is not connected to our community very much, and that's a problem. Maybe that's also what Brad Spengler has called out, in the sense that these people exist, but his point was that they expect to be paid, and they're in some other communities, and they're not commenting much, and Jann does not scale for all the needs. I don't
personally know if it's fair to say that these people are just in it for the money, and I guess that wasn't the whole point of the discussion, but I think there are a lot of talented people out there who are really interested in writing exploits just because they enjoy breaking things, or making things behave in a way they're not expected to behave. For a lot of them, I think, if there is money in it then that's probably fine, but that's not necessarily the mindset these people have, in the same way that I don't have the mindset of developing new features for money; I do it because it's fun. I think this goes back to Andy's point: it's just not fun to review other people's code per se. You have to sit down, and there is a patch series, it doesn't matter whether it's on GitHub or on the mailing list, and you have to look at this code, you have to stare at it, you have to understand what's going on, you maybe have to apply the patch to your tree to see its context, and then you have to think about all of the cases where it can break. That's way less fun than sitting down, staring into assembly, and breaking things; that's way more fun. I think that's ultimately the problem: convincing people that they need to do both. Very good point. Does anyone else want to comment? Maybe we should also get back to our queue. Does anyone from the panel want to bring something out of the queue? I have a question for me specifically, about the future of sanitizers, the roadmaps or any plans that we have, and in particular about HWASan and MTE. MTE is a hardware technique that effectively gives
you AddressSanitizer capabilities in the CPU at very low cost. It can protect from use-after-free and out-of-bounds bugs, and we have very large plans for it; I think it will change the landscape of memory corruption exploitation very significantly, because it can simply be enabled in production all the time. For ARM the specification is already published, and I hope we will see actual CPUs with MTE soon, maybe within a year or so, and we definitely hope that other vendors will do something similar in their CPUs, following ARM. The next sanitizer that we have in our plans is currently called KFENCE; previously it was called GWP-ASan, and I gave a presentation about it at the last Plumbers. It's a tool that also gives you detection of use-after-free and out-of-bounds, at literally zero cost but with very low probability: let's say we sample one out of a million allocations and detect bugs only on that allocation. The idea is that you can deploy this to a whole fleet, a data center, or all of your devices, or all of your IoT devices or phones, and then the scale gives you the probability back, so across the whole fleet you can detect all of the bugs that happen. We also have future plans for trying to do something similar for other bug types, for example probabilistic low-rate detection for data races or other kinds of undefined behavior. That's it. I'm very excited for control flow integrity to become a widespread thing. In x86 land we have the Intel CET specification, which may or may not show up in computers near us sometime soon. CET gives us strong integrity on returns, but very weak control flow integrity on indirect calls, and I would love to see some of the LLVM efforts to build stronger control flow integrity combined with features like CET, to give us something that is overall very strong. Unfortunately, I don't think anyone has even tried to write patches to
make this work in the kernel, and doing so is going to be a mess, because x86 is a mess, but someday this will happen. Does anyone want to comment on this? What's the status? I think I saw Kees briefly speaking about this, but I didn't catch the whole talk. What's the status of CFI, is it going to be upstream soon? As Kees was saying yesterday, the status is that the code lives in the Android tree; I also didn't catch all the details. As far as I know, CFI has been deployed in Android for some time now, and it's in the process of being upstreamed and integrated, which is a long process: it takes lots of fixing, and it also requires full link-time optimization of the whole kernel build as a prerequisite. So it should happen reasonably soon; it's in progress, and I agree it's a very high-impact technique as well. I think we have one more question, I don't know if we have answered this: does having an LKDTM test accompany a new hardening patch help in this regard, with review and acceptance? I think just having a test with every feature is really what we want; it helps with any feature. If I have a test that I can run to verify a new feature that I see being upstreamed, that's obviously pretty great. And that is, by the way, to some extent also a question of how you have been socialized into writing patches, but I see people, especially from my generation or younger generations, pushing patches upstream with tests. They know that kselftest exists, that the Linux kernel has some test suites, and so they usually send tests along with the patch, and that's extremely helpful, because it gives you the confidence that the person didn't just compile-test the change, but actually wrote a test and verified that the change works. So I think in general this gives a lot more credibility when you send a patch series. Okay.
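The kselftest convention referred to here can be illustrated with a minimal sketch. Everything below is illustrative rather than taken from any real selftest: the helper name and the probe are placeholders, and an actual test would exercise the feature under review instead. The exit-code convention (0 for pass, non-zero for fail, 4 for skip) is the one used by the scripts under tools/testing/selftests.

```shell
# Minimal sketch of a kselftest-style script. Convention: return 0 = pass,
# non-zero = fail, and the special code 4 (KSFT_SKIP) when the feature under
# test is unavailable on this kernel/configuration.
# The "probe" command is a stand-in, not a real kernel interface.

KSFT_SKIP=4

run_test() {
    probe=$1   # command that exercises the feature being tested

    if ! command -v "$probe" >/dev/null 2>&1; then
        echo "SKIP: $probe not available"
        return "$KSFT_SKIP"
    fi

    if "$probe"; then
        echo "PASS: $probe"
    else
        echo "FAIL: $probe"
        return 1
    fi
}
```

The skip code matters because the kselftest harness treats it specially: a test that cannot run on a given configuration is reported as skipped rather than failed, which keeps results meaningful across very different kernels.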
Does anyone want to add a comment? Let's see what else we have. Thank you. I have a question: now that we have KCSAN, which is the newest feature related to, I guess, syzkaller, is this going to have a lot of impact in the kernel? Because I hear there are a lot of bugs sitting in a queue. Yes, there are lots of bugs, and the story with KCSAN is very difficult. It finds data races, and the problem is that the kernel has lots and lots of what are called "benign" data races, and I put that in quotes, because from the point of view of the C standard, which is the language the kernel is written in, any data race is undefined behavior, so it's a very bad bug, like a use-after-free. But in the kernel they are considered benign, and so far there is no agreement on removing all of them. So what happens with KCSAN is that it traps on all of those kind-of-intentional data races that are not really considered bugs by developers. We have lots of those currently in the queue, and it's very hard to find the actually harmful bugs among them. It's also unclear how we can deploy this on syzbot, because syzbot has automated reporting, and if we just start reporting all of those, I think it will not be well received by the majority of developers, and there is no way to filter out only the harmful ones. Also, fixing them in the kernel is lots of work, especially while there is no green light for eliminating all of them, because it currently really depends on the subsystem: some subsystems very much welcome fixes for races and want to fix all of them, while with others you end up in a lengthy discussion with the maintainer about the data races and they don't agree to take any fixes. But on the other hand, we know from AddressSanitizer that lots of the bugs, lots of the use-after-frees and even out-of-bounds,
are caused by data races. We see that because, say, the free happened in one task and the access happened in another task, or we see that the bug is not well reproducible, or that it manifests differently each time, so most likely it's a data race. I would actually go as far as saying that data races are a major source of bugs in the kernel, so in the end KCSAN would be super useful, but currently we can't take advantage of it. We would need to figure out what to do with the benign races, and if you're asking me, I would say that we need to fix all of them, simply because that would allow us to use the tool: regardless of the standard, and of whether they can really be harmful or not, we can just forget about that and fix them for the sake of the tool itself. So what you're saying is that there is currently no consensus on what to do with benign data races? Yes. I think we have mostly answered two questions; there is one question about what people think of a kernel security bug bounty, that is, paying researchers for proof-of-concept exploits. We have discussed that there is a lack of interest, or maybe not enough participation, from exploit writers, but I don't know how we can possibly answer this question. I guess it's a proposal to run a security bug bounty on the Linux kernel itself, but the financial aspect is something I don't even understand how it would be run. Does anyone want to comment on this one? I think it conflicts somewhat with the number of bugs. Usually those programs run for projects that are already well tested and fuzzed, so there are few security bugs left, they are hard to find, and it's reasonable to pay for them. But today, say somebody takes a hundred use-after-frees from the syzbot dashboard and copy-pastes them into the bug bounty submission form, and what if a hundred people do the same? I think
here the idea would be not just the bugs: you would actually write a proof-of-concept exploit. It doesn't have to be anything elaborate, but you would have to show that the technique is new enough, and you can try to make it more clever, but I still don't know how far we can go with this. I know lots of bug bounty programs, say for Chrome, don't require a proof of concept; you can get more money if you create a full system exploit and actually prove it, but they usually pay for just a use-after-free, because it's too expensive to create a full exploit. And I don't know how we are doing on time, because it shows one hour here, we only have one hour, and this is supposed to end. Are we over time already? I would just bring up one point: I think Christian mentioned that it's not easy to do review, that it's not fun to review other people's patches, and I agree very much. I think one of the things probably everybody can help with is this: it's not easy to actually review, but it's probably easy to just take the patches and test them. Even if somebody else has posted patches for a particular architecture, you can test them, make sure they don't break your architecture, and share your Tested-by, because that seems to be easier than actually doing the review. It also ensures that somebody else has tested on a different architecture, that one change didn't impact something else, and that new bugs are not introduced because of it. So probably people can contribute by doing more testing of patches before they are even upstreamed or accepted.
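The workflow just described, applying someone else's patch to your own tree, building it, and replying with a Tested-by tag, might be sketched roughly as below. This is a hedged illustration, not a standard tool: the helper name, the sample tag, and the build command are all placeholders, and a real kernel run would use something like `make ARCH=... CROSS_COMPILE=...` followed by a boot test instead of an arbitrary command.

```shell
# Hedged sketch of a "test someone else's patch" helper: apply a patch in
# mbox format to the current git tree, run a caller-supplied build/test
# command, and print a Tested-by line on success. Names are illustrative.

test_patch() {
    patch_file=$1   # path to the patch (mbox format, as saved from the list)
    build_cmd=$2    # e.g. "make -j8", or a cross-build plus boot-test script

    if ! git am "$patch_file"; then
        git am --abort
        echo "patch does not apply"
        return 1
    fi

    if sh -c "$build_cmd"; then
        # In a real reply this would be your own name and address.
        echo "Tested-by: Your Name <you@example.com>"
    else
        echo "build or test failed"
        return 1
    fi
}
```

The printed tag would then go into a reply to the patch email, which is exactly the low-effort, high-value contribution being suggested here: the maintainer learns the series at least builds and runs on one more configuration.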
Yeah, thank you, Nayna, I think this is actually a good point, and I think Kees, who is usually here, is always looking for volunteers to test on different architectures, so that's really a good second point. But I think we are out of time now, so I would like to thank all the panel members for an interesting discussion and for being with us today, and also all the people who have asked questions, and I'm going to hand it over to James for his closing remarks. Thanks, Elena. Can you hear me? Yes? You're able to hear me okay, getting some thumbs up, okay, thanks. That was a really great discussion to close out with, and I'd really like to thank everyone, all of the attendees who are online now. We've had really good attendance: during Brad's discussion this morning we had over 300 people, and we've maintained well over 100 throughout the conference. To the people out there watching later as well, thanks for checking this out. To the speakers who put in proposals and went through all the processes and the uncertainties, thank you. A special thank you to all the panelists who joined; this is something we arranged at the last minute and thought would be a good way to make this more collaborative, to give more of a conference feel to something that is a virtual event. And a great thank you to Elena, who I think has done an excellent job shaping a really productive discussion; we had a lot of the really core people who understand Linux security and work in it every day participating, and we've had a diversity of input from people slightly outside that group too, which I think is really important. There was some discussion at one point about whether we were going to have this conference at all this year, due to COVID, and I think it's been important that people have been able to present the work they've been doing and get it out to the community, for people to be able to ask questions and
to have these discussions. It's not optimal, it's not as good as in person for many reasons, but I think it has perhaps allowed others to participate this year who may not have been able to in the past. So, regardless of COVID status, next year we'll look at possible online participation and think about whether that continues to make sense. I think we also have to be able to adapt to the world changing, we being the Linux community and the security communities, and in fact it's possibly even more important now to be focused on security defenses, given the amount of critical functionality that has moved online through COVID; certainly we're seeing reports of attackers targeting companies, organizations, and people that are now moving to much more online work. I'd also like to mention that Linux Security Summit Europe will be happening; the CFP is open until the end of July, and I certainly encourage people to submit talks. Elena will be running that conference, as she has for the past couple of years. I'd like to thank the Linux Foundation, who were able to bring all of this together online, especially Gillian Hall, who really takes the lead for us on the Linux Security Summit, and Angela Brown, of course, who heads up all of this, and Trisha and all of the others at the Linux Foundation, and the engineers we've had working on this today. With that, you'll be able to review the videos and slides online shortly after the event. Okay, thanks to everyone, and I hope to see you at the next conference.