Okay, hi, hi everyone. All right, we have a really big topic that we could spend the entire day talking about, so we're just gonna start talking about it. I don't wanna miss anything, so I'm gonna keep my notes here. And because I am one of those people who always forgets to introduce myself, I'm gonna just get that out of the way, as I do that constantly. I'm Lisa, I work for Cockroach Labs, which is the company behind CockroachDB, a distributed SQL database. And I am here with a panel of experts on security, container security, supply chain security, and open source in general. So I am super excited to introduce the panel. In fact, I'll let them introduce themselves. Liz, why don't we start with you? Hi, yeah, so my name's Liz Rice. I am Chief Open Source Officer at Isovalent, which is the company that originally created the Cilium project, which you might know for sort of eBPF-based networking. I, in the past, wrote a book on container security. I've been very involved in that kind of container and security space for quite a while now. They call me the Madonna of Container Security. I think that's because of my Twitter picture where I've got like a little microphone. So, as I say, I'm head of insights and analytics at Slim.AI, it's a SaaS platform where we do container intelligence, container optimization, and container security. And I'm Josh Bressers, the Vice President of Security at a company called Anchore. We do next-generation SCA. Syft and Grype, our open source projects, which many of you may have heard of, are for SBOM and vulnerability analysis. Okay, so are we giving up on that? No luck with the slides, slide? I know people are gonna come in late, they're gonna wonder who we are. Okay, and just in case you wanna keep this conversation going in the hallway or after, I'm also a CNCF ambassador, so I will be at the CNCF booth later today, and you will all be around the hallway tracks, because nobody has a booth here this year, right?
Okay, so you'll just have to find everybody in the hallway or on Twitter, which you'd be able to see the addresses of if the slide comes up. But let's start with the elephant in the room. Is open source software more or less exposed to supply chain attacks? It's always gonna be hard to know, because who knows what's in proprietary software? At least with open source, you can see what's in it. That's true. 95%. Yeah, I mean, they're all gonna be subject to the same kind of attacks. They're probably all using the same kind of components. Everybody's building with languages that are generally open source and have a large amount of open source componentry to them, so I think it would be very misleading to imagine that open source is somehow more vulnerable. Yeah, I'll say that the world of open source and containers is vast, varied and complex. And there are different pockets of density. There are certain maintainers, certain communities doing a better job than the rest. But as Liz said, it is such a vast unknown of things that are happening, and it's such a huge space as well. We hardly know. We hardly know the answers to that question. I'll turn it upside down. I'll say it's no worse or better than proprietary. I'm fairly confident, if you look at all of the statistics we have, everything we know, that all software is fundamentally the same at the end of the day. It's easy to say open source is more secure. It's easy to say it's less secure, but I'm skeptical we have data that can say either way. And my suspicion is it's all the same. Wait, okay, so Josh, what's the weakest link in supply chain and container security that people aren't thinking about right now? Is it containers? So our title? The slide we don't have? You're saying the container is the weakest link. It's not at all. I don't think you can say there's a weakest link anywhere; it depends on how you're using your open source or closed source or whatever you have, right?
And it's like any supply chain: if you have a weak supplier anywhere in that chain, that obviously becomes your weakest link. So for example, let's say I'm using container images and I'm just pulling a random image I found that I think does my job; that could be my weakest link. I could be using a trusted container image, but then if I'm pulling some random npm package that no one knows where it came from or what it does, that's my weakest link. It all depends, there's no one answer, which I think is one of the enormous challenges we have: a lot of other industries, a lot of other places can point at one thing and say, this is what we should focus on. We have to point at everything, because we literally don't know until we go looking, and that's really hard. Okay, so I could add into that that perhaps humans are the weakest link. You know, everything that we can automate speeds things up, gets things addressed, but where we rely on humans to make decisions or to manually do any part of the process of mitigating or patching or anything like that, probably some humans are gonna make a mistake. So unfortunately we're all the weakest link. Prone to error, maybe less than, more than machines, let's say. Maybe if I take that question as well: to me, the more I look at the data, the more I realize that the unknown, the uncharted territory, is huge when it comes to, for example, container-based CVEs. And when you look into the amount of software, the sheer amount of software that we've been putting out there, Josh had a great blog post about the npm ecosystem alone, and I think the numbers are, correct me if I'm wrong, but we were talking about 2.3 million npm packages in total; with the different releases, different versions, that comes up to 32 million, and we're adding about a million new packages every month. So that's the sheer amount of that ecosystem alone.
And if we wanted to review this ecosystem regularly, let's say that we take 10,000 of these packages and throw in maybe a thousand researchers, how long would it take to just look into these, and how long would it take to do the entire npm ecosystem alone? And I think the calculation was like 3,000 years. It was a long time, yes. Like, human scale, okay? If we want to do this, if we want to swim against the current and say we can throw more humans into this equation so that we're ahead of the curve, that's not happening. We need to rethink this. We need to think about navigating complexity by embracing automation, not through more researchers, more biological intelligence in the system, if you will. Okay, so automation. I mean, what other immediate steps should engineering orgs, or really anybody who's involved in the DevOps delivery motion, take to protect against these risks? I'll jump in. So from my perspective, I have a background in, I was at Elastic at one point, and there's observability. When we started deploying a lot of infrastructure, I mean a lot, like thousands of machines, one of the first challenges we came up with was just understanding what's going on, like how many machines are running, how many machines are having problems. And we created all of this observability technology. And so I think now we've reached a point where we need observability for software. And this is where you want to reach for things like software bills of materials, which are a great example. Look at what you have. And I don't mean you necessarily go inspect every single package individually and things like that. I mean, this is where the science of observability comes in. You're looking at things as a whole, in aggregate, right? You're not looking at individual results. You get a feel for what do I have, what's going on.
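To make that scale argument concrete, here is a rough back-of-the-envelope sketch in Python. The review rate and researcher count are illustrative assumptions, not figures from the panel, so the totals differ from the "3,000 years" quoted above; the shape of the conclusion is the same either way:

```python
# Back-of-envelope: can human review keep up with the npm ecosystem?
# Assumed figures (illustrative): ~32M package versions exist,
# ~1M new package versions arrive per month, and a thorough security
# review takes one researcher roughly two weeks per version.

TOTAL_VERSIONS = 32_000_000
NEW_PER_YEAR = 1_000_000 * 12
RESEARCHERS = 1_000
REVIEWS_PER_RESEARCHER_PER_YEAR = 26  # ~2 weeks per review

capacity_per_year = RESEARCHERS * REVIEWS_PER_RESEARCHER_PER_YEAR
years_for_backlog = TOTAL_VERSIONS / capacity_per_year

print(f"Annual review capacity: {capacity_per_year:,} versions")
print(f"Years to clear today's backlog: {years_for_backlog:,.0f}")
print(f"New versions per year: {NEW_PER_YEAR:,} "
      f"({NEW_PER_YEAR / capacity_per_year:.0f}x our capacity)")
```

Whatever review rate you plug in, the monthly influx alone outruns the reviewers by orders of magnitude, which is the panel's point: this does not scale with humans.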
And then you can start looking for anomalies, for example, and trying to figure out, okay, something weird is going on, human, go look at that, but we can ignore the 35,000 other things over here that seem to be going well. Okay. All right, so Ayşe, you're our data scientist here on the stage, and in your role, I know you've been studying this issue for a while, and you did a keynote on this not that long ago. So what is the data telling us now about supply chains and security, and what trends are you seeing, and what have you learned since your famous keynote at KubeCon six months ago? Yeah, things are changing really fast. So let's maybe look at this, let's look into the near past and present trends extrapolated to the future, maybe talking about gen AI and its implications, because I feel like it is the inflection point, especially the last couple of months. So at Slim.AI, we've been scanning millions of containers. This year alone, we have scanned more than 3 million containers. And that gives us leverage into looking into how big the challenge is and how all the effort that we have been putting in over the last couple of years is paying off. Especially in 2022, in the aftermath of multiple security incidents. You know what I'm talking about? There were some things that happened, yeah. Some things have happened. I guess it is fair to say that 2022 will be remembered as the year of software supply chain security. A renewed sense of industry-wide awareness. Customers, governments are demanding zero vulnerabilities in their software. Tons of effort going in. So, bottom line up front: we've been putting these container security reports out there over the last 18 months. There is not a single category of containers, of container distros, that has fewer vulnerabilities today than it did six months, 12 months, 18 months ago. And it is not just the vulnerabilities, the CVE counts alone.
The component complexity, the layers, the sizes of these containers, how long it takes to scan them using open source tools, even the metadata about these containers, their SBOMs, have increased by about 50% in the last year alone. So it is not like we are not remediating vulnerabilities. It's actually the other way around. There's a ton of effort, but for every vulnerability that we remediate, we detect four times more vulnerabilities. And these remediation cycles are so slow, so much so that when we see a CVE in, let's say, publicly available top containers, the likelihood that that vulnerability gets resolved in the next 180, 195 days is less than 20%. So we are working, there's a lot more effort going in, but the influx is huge. And at this pace, with the same tools and techniques, we are not getting ahead of the curve. So the challenge is out there, but this is not the end of the story. This is actually the beginning of the story when it comes to the bigger picture. There's an underlying current here that I would like to talk about in a bit, but it is this uncharted territory, the things that we don't know, that can actually be quantified from a data science perspective, that we can talk about when we talk about the future. But I'll leave it there. It's not a very optimistic perspective right now, especially if we don't change our perspective. We promise we will have some good news by the end of this. We'll make sure everybody goes to sleep nice and safe and sound. So, speaking of people that are doing some things about this, and sorry we don't have our slides so that you know who we are, so I'm just gonna kind of keep introducing people. But probably the person that needs the least amount of introduction up here is Liz, Liz Rice from Isovalent. Liz has been a driving force behind some of the leading open source container security tools, and now at Isovalent is working on the Cilium project.
So do you wanna tell us about eBPF and some of the really cool work you're doing and how tools are solving this problem? Yeah, so, I mean, Josh mentioned observability, and eBPF is an incredible technology that we can use for extracting all manner of observability information, as well as security and networking. I mentioned Cilium is primarily known for networking, and there's a whole rabbit hole I can go down now where we can talk about runtime security versus supply chain security. On the networking side, we all expect firewalls. We use firewalls all the time. In networking, runtime security is the only kind of security that we do. For executables, looking at things like file access or privilege escalation, what have you, for whatever reason, across industry, we're a little bit more reluctant to turn on kind of runtime firewalling in the same way. But we do have the ability to use eBPF to enable that kind of thing in the future. And today, things like the observability, the behavior: there's a really great project that we have as part of Cilium called Tetragon that's using eBPF to observe security-relevant events. So things like file access or network access or privilege escalation, and at least give you the data on when those things are happening that are unexpected, when you're getting access to sensitive files, for example. So that's not really the supply chain side, but I think it's a very important part of the future battle to protect software. I think the thing that, I mean, in my sort of previous life with some of the scanning tools that I've been involved with, I always used to say the one thing that you can do is scan your images for vulnerabilities, because at least you might find some low-hanging fruit that way. And I think we have a tendency to think about what's the cutting edge, what are the latest developments that we have in industry.
And the reality is a lot of enterprises are still kind of running on outdated kernels and distributions or other outdated software, and really just getting a sense of whether you're running kind of modern code would be a really good starting point for a lot of enterprises. Sounds good. And Josh, I know you've built and managed security teams and you're doing that again now at Anchore. Josh Bressers, by the way, gonna reintroduce people. But you're also a student of history on this topic, and I'm curious what you're seeing repeat itself. And so, like, myth versus reality: you already touched on some of the craziness that's been happening, but is it real, all the attacks we hear about, all of the fear and the dread that is getting put into our minds? I mean, we exist in this industry of security that feeds on negative energy, right? Like, if you look at a lot of the news stories we saw when Log4Shell was happening, it was all about the world is ending, holy cow, look at this; it wasn't, like, holy cow, open source is everywhere, isn't that neat? And I think we are now at a point where more and more people are paying attention than ever before. I mean, Liz was just saying how things like running up-to-date software is important and a lot of people aren't doing that. Like, there's all this boring stuff we never talk about. You're not gonna get a talk accepted at a conference that's like how to update your Linux kernel, but that is literally vastly more important than any of the topics you're probably going to hear about. They're new and exciting, but the boring stuff matters. And I think this is true of history, and here's the one piece I would take away from this. So I have a podcast called the Open Source Security Podcast. Well, I kind of have two of them. The other one's called Hacker History. It's very fun. But I recently spoke with one of the authors of the Rust book, and she was talking about railroad safety, okay?
And moving from manual brakes: the reason trains used to have a lot of people on them is human beings would literally, like, run around on the train and twist the wheels to stop the train, right? And it was incredibly unsafe and tons of people got killed. It took them 80 years to go from manual brakes to air brakes, and air brakes are incredibly safe. People aren't running around on a train falling off and things like that. 80 years. How old is our industry? Not very old. And so sometimes it becomes very frustrating to think we're not making progress fast enough. I am as guilty as anyone of complaining about how slow I think things are. But when you look at the context of other human safety measures, we've got like 50 more years to go. So don't lose hope; things are getting better, but it takes time. And I think you made the point yesterday that the vast majority of things that we do with software do kind of work. Like, you know, most of us still keep our money in our bank. That's right, I do not keep my money buried in the yard. That's right, it's at a bank and it goes pretty well. As a society, we are definitely, like, if we claim that we're at the edge of collapse, that'd be just fear-mongering. We've been claiming that for hundreds of years. But I think also there's that other side of the coin where we need to remain vigilant and agile and efficient thinking about these things, probably ahead of time, especially when the pace of things is so huge. Like, yesterday there was news from OpenAI that they said they're right now using GPT-4 to understand GPT-2. So AI is being used to understand itself going forward, which will probably be more explosive going forward in terms of the scale of opportunities, but as well as the risks; understanding them and being intentional about how we approach them is still important. For sure. Yeah, we were pretty good. We were at least 15 minutes in before we mentioned generative AI and ChatGPT.
That's probably a record for talks here. But having said that, one of the keynotes this morning was really interesting. And they were talking about generative AI and how most of the work and the models are being built by larger companies, just because of the amount of resources that it takes to run these. But smaller companies are starting to get involved. And I'm wondering, does that scare us? Is that making it more secure, less secure, when the smaller companies get involved? How much should this be regulated? Should the government step in? So what are you thinking about generative AI and security around that? In containers or not in containers? I have personally only dabbled with ChatGPT. I'm sure like most people I played with it and asked it questions about things that I know about, and received some pretty convincing and yet entirely wrong answers. And that makes me pretty sure that we're gonna see quite a lot of, for example, functional but insecure code generated. I do think that it's gonna be part of a really interesting arms race, if you like. Software that's generated by generative AI, and exploits that are found by generative AI, that will kind of increase the pace of that arms race that already exists with humans. Now it will just be faster. And I'm sure we're gonna see some vulnerabilities discovered that are hallucinated, that just aren't really vulnerabilities at all. So we'll all have to, not just in security, in life in general, we're gonna have to be very careful about what we believe and how we verify some of the information that we're getting from these tools. Any other thoughts? I'm completely unqualified to answer this question, but you are not. So yeah, in my career, I have spent the last 10, 15 years in the intersection of data science and cybersecurity. And I do think that things have changed; the rules of the game might have changed in the last couple of months.
I was at RSA last week, the week before, and the enterprise security teams, they are not thrilled about the fact that anybody who can prompt can write code going forward. And we were just joking yesterday that they're probably not thrilled about their developers writing code anyway. But there'll be more code than ever before. And there's a dual nature to this, right? So your marketing person, your art history student, creating code: a lot of opportunities, but it also presents lots of risks. There's the security researcher armed with, augmented with, a digital mind deployed into the systems so that they can find the needle in the haystack faster. But there's also the other side of the coin, the bad actor writing more robust malicious code, weaponizing AI. Oh, perfect. If you're talking about AI and we can't put a thing on our slide here, perfect. There we go, I feel better about myself already. Thank you, Lisa. Okay, apparently I'm supposed to be at the Open Source Summit live on stage right now. Sorry about that. I thought when you go to slideshow mode, it doesn't put the alerts up. Hopefully there'll be nothing terrible that shows up there. But okay, so Slim.AI, and, well, now you know who we all are, I can stop introducing people. But Slim.AI, Anchore, and Isovalent are all container, mostly container, technology companies. So let's just, for one minute, even though we're supposed to talk about containers, and I can see ships full of them out there. It's a great place to have a conference talking about containers. But what about non-containers? Is there any hope for our VM-bound cousins? I think as soon as we talk about containers and container security, we have the question of: are containers a security boundary? And there's a sort of truism that says no, they're not a security boundary, and containers are just processes, really. I slightly disagree with that, because there are things that you can do to somewhat harden the boundary around a container.
But essentially, containers, not containers, VMs, bare metal: it's all running basically the same software. And those vulnerabilities exist. They might be slightly easier to exploit in some container environments, and it might be no different. I don't really know; does the data bear out any relationship between software being more vulnerable if it's running in a container? Again, it goes back to the is-open-source-more-vulnerable-to-supply-chain-attacks type of question. There's no true, like, there's no, it's not a binary thing. I hate to say this as a data scientist, but it depends on whose containers we are talking about. I can talk about top publicly available containers that are open source and everybody is using, and how their security posture has been changing. But we are talking about, in Docker alone, I believe, 10 million containers, right? So generalizing these things would be oversimplifying the issue. So there's actually some work happening in the SBOM universe to kind of address some of this. There are multiple stages for any application. There's development, there's build, there's deploy. And if you deploy on a virtual machine, for example, now you can talk about this thing that we're calling runtime SBOMs, where we look at everything in the virtual machine: what are all the processes running? What is happening in the system? And so this is being thought about. Like, VMs aren't being abandoned by any means, and goodness knows they'll outlive all of us. But there's definitely some work and thought going into this, because it's a real environment many, many people have, and we would ignore it at our peril. And I'm really, really happy that the folks at CISA are thinking about this stuff. It's great.
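One way to picture the runtime SBOM idea Josh describes is as a diff: what the build-time SBOM says you shipped versus what is actually observed executing on the machine. This is a toy sketch; the component names are invented and this is not any particular tool's data format:

```python
# Toy "runtime SBOM" comparison: build-time inventory vs. what was
# actually observed executing on the VM. All component names invented.

build_sbom = {"openssl@3.0.8", "glibc@2.37", "nginx@1.24", "leftover-cli@0.1"}
observed_at_runtime = {"openssl@3.0.8", "glibc@2.37", "nginx@1.24", "cryptominer@6.6"}

never_ran = build_sbom - observed_at_runtime    # shipped but unused: candidates to slim away
unaccounted = observed_at_runtime - build_sbom  # running but not in the SBOM: investigate!

print("Shipped but never observed running:", sorted(never_ran))
print("Running but not in the build SBOM:", sorted(unaccounted))
```

Either side of the diff is interesting: the first set is excess attack surface you could remove, the second is exactly the kind of anomaly a human should go look at.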
And since we're at an open source conference and I'm sure there are a lot of maintainers here, I wanted to ask: are we already asking too much of open source maintainers? Like, what security burdens should we be putting on the maintainers? And does it matter if we're talking about a large project like Kubernetes or maybe one of the smaller ones? So what should the expectation be here? So I think there's a general question there about the expectations that we put on maintainers, and frankly, we shouldn't put any expectations on maintainers for open source. They might have expectations put on them because they're paid by a company and there is some commercial relationship around the work they're doing. But purely the fact that you're somebody who writes code and contributes it, puts it on GitHub or whatever, that does not imply that you have an absolute responsibility to do anything at all. So I think we need to be a little bit careful when we talk about requiring maintainers to do things. But we also operate in this context where open source is being used in commercial environments and people are being paid salaries to do this kind of maintenance and other work. I don't think there's ever been more tooling available to help maintainers, and I'm sure it will continue to progress. So things like Dependabot that will tell you about vulnerabilities in your repositories: that's so much better than a few years ago. We're offering these tools that people can use to improve the security posture of their projects, and then things like the CNCF or other foundations can provide guidance and maybe have certain criteria that they expect from projects as they're becoming more mature, that can help people follow best practices and create more secure software, and then people will have more confidence in running it.
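As a tiny illustration of what that tooling automates for maintainers: compare a project's pinned dependency versions against a list of known-vulnerable ranges. The advisory entries and package names below are invented; real tools consume feeds like OSV or the GitHub Advisory Database and handle full version-range semantics:

```python
# Minimal dependency-vs-advisory check. Advisory entries and package
# names are invented examples, not real CVE data.

deps = {"examplelib": (1, 2, 3), "otherlib": (4, 0, 0)}

# (package, fixed_in, advisory id): versions below fixed_in are affected
advisories = [("examplelib", (1, 2, 5), "FAKE-2023-0001"),
              ("nosuchlib", (9, 9, 9), "FAKE-2023-0002")]

def affected(deps, advisories):
    """Return (package, advisory_id, fixed_in) for each vulnerable dep."""
    hits = []
    for pkg, fixed_in, advisory_id in advisories:
        if pkg in deps and deps[pkg] < fixed_in:  # tuple comparison
            hits.append((pkg, advisory_id, fixed_in))
    return hits

for pkg, advisory_id, fixed_in in affected(deps, advisories):
    print(f"{pkg}: {advisory_id}, upgrade to {'.'.join(map(str, fixed_in))}+")
```

The point of the panel stands even in this toy: once the advisory feed exists, the check itself is trivially automatable, so no maintainer should be doing it by hand.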
Maybe to add to that: the majority of the software that we put out there, including open source, but maybe specifically open source, has not been reviewed from a security perspective. And let me drop in a couple of numbers as it relates to publicly available containers. For example, we have looked into about 55,000 packages out there, and of these packages, 1% of the packages explain 25% of all CVEs that we found. 86% of the packages, and these are packages in images that have been pulled billions of times, so a lot of eyes are on these packages already, 86% of the packages have zero vulnerabilities. The more... Zero, no... Zero, no vulnerabilities, CVEs. Identified CVEs is the right terminology here. So when you look into packages, for the more popular packages, the likelihood that they harbor more CVEs increases. I call it the popularity paradox. The headline would be something like: the most popular packages present the biggest risk to your company today, really. The 86% of the packages that you're shipping into your production thinking that there are zero vulnerabilities, zero CVEs: you might, as an organization, be walking into a trap with this mindset. There are these SLOs on zero vulnerabilities and all that race, which is a baseline to be compliant. Governments asking for, companies asking for zero vulnerabilities is totally understandable, but thinking that when you ship redundant, excessive code into your production environment and say it's zero vulnerable, like zero vulnerabilities are in this container, in this application, it's the wrong mindset. And as I said at the very beginning, we are trying to navigate all this complexity through manual processes. We have run this global randomized survey asking questions of developers and DevOps engineers.
And the majority of them said that the way they remediate vulnerabilities in their containers, the way they slim and harden their containers, is manual processes, no automation. Going back to the maintainers: all that work, the sheer amount of code, even without AI generating anything for us, and then the CVEs, how the number of CVEs is changing over time; what we remediate is very little compared to the amount of code that is coming in. It is humanly not possible for us to do this alone. We just need to augment ourselves. And that's why I'm cautiously optimistic about generative AI, because I can see how it can start creating a ton of new CVEs for us, finding these blind spots in the software systems, but it also can help us, maybe not right now, but in the next generation, to find these vulnerabilities faster; it will automate patching faster so that humans are not overwhelmed with this sheer amount of work. I have two things I wanna add very quickly. First of all, open source owes us nothing, and I think we can never forget that. And secondly, the Atlantic Council released a paper a couple of months ago. They compare the open source community to water, like the water system. And it's very good, go read it, I won't dwell on it, but I think they capture very well the kind of ideas of how open source exists within the larger universe. Okay, I will take some questions from the audience. I have a bunch of questions that I crowdsourced earlier from Twitter, but I just wanna see a show of hands: who is working on a project as a maintainer or contributor to one of the open source security projects right now? Which project? Tern? Who else? Which project? As well? Tern and SBOMs? Okay, other maintainers or contributors? Yep, which one? Sigstore. Sigstore? Cool, cool. Okay, so yeah, we'll take a few questions from the audience.
If you speak into the mic, everybody virtually can hear you too. Do you think that there's an over-fixation on zero CVEs in container images? Like, I know VMware has a whole team that analyzes, like, thousands of CVEs, and it's something like 8% that are actually exploitable, and scanners are, like, very apt to flag all of them. You know, we talked about how doomsday headlines will get you, like, hype. And that's kind of, I feel like, what scanners do; they're like, look at all these CVEs that you have. But then if you're like, okay, I need to fix all these CVEs, you also have, like, integration risk and retesting, and a lot of overhead around that. I'm just curious what your opinions are on the over-fixation. I completely agree with that, for two reasons. So one is, yeah, just because you found a vulnerability, particularly if it's a low or medium severity vulnerability, it may well not be exploitable and can be safely ignored. But the other thing that I think plays into this is the checkbox mentality, where people think, oh, I've passed whatever criteria I need for however many vulnerabilities or whatever severity we're gonna accept, and therefore I don't need to think about it. And you know, everything's fine, it's all secure. And I think there is a problem in security in general that people think about the checkbox; they don't think about the actual reality of what can happen if you run that software. Fun question, thank you. Rose, we have another question in the back. I hope the microphone can go everywhere. Can you hear me okay? Hi, I'm Justin Murphy. I hesitate to say this, but I work for the US government, at CISA. But one thing, Josh, especially something that you said: maybe it would be more beneficial to come to a conference like this and hear a talk about how to update your Linux kernel than some of the flashy topics.
And so, I'd love to hear from all of you: what I feel like I'm hearing you say is that we need a change in mindset. Even the previous question about the idea of zero CVEs, right? What are some simple starting points? I come from the US government. What are some things that the US government can do to help change the mindset around some of the ways that we approach this, from, like, a simple starting point? Are there things that are already happening that we should be paying attention to, things like that? I'm happy to jump in on this. So the US government, I think, is doing some good work. I think a lot of governments are now. I mean, obviously the investment CISA is doing around their Known Exploited Vulnerabilities list. You've got the SBOM work that's going on. There's VEX that just came out. There's a ton going on. And I think, fundamentally, at the end of the day, the moral responsibility of the governments of the world is going to be to use their purse strings to enforce good behavior. Because that's really one of the things the government can truly do: they can say, these are the expectations we have, and we're just not going to buy your products if you don't do this. And I think historically you always have industry gnashing and wailing that, oh, we can't possibly do that. We can't put airbags in all our cars. That'll bankrupt us. And yet somehow they all survived. And so I think it's going to be very similar in security. And I think the governments are on the right track. I'm not saying they're going to stay on the right track forever, but at the moment I feel like we're making positive progress. So see, it's not all doom and gloom, right? We get to end with some positive in there, right? That was a very small amount of not doom and gloom from you. Thank you, Josh. Any other thoughts on this?
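To give a concrete feel for how a known-exploited list changes triage: instead of treating every CVE a scanner reports as equal, intersect the findings with the known-exploited set and fix that intersection first. The CVE IDs below are placeholders, not real advisories:

```python
# Prioritize scanner findings against a KEV-style known-exploited set.
# All CVE IDs here are placeholders, not real advisories.

scanner_findings = {"CVE-0000-1111", "CVE-0000-2222",
                    "CVE-0000-3333", "CVE-0000-4444"}
known_exploited = {"CVE-0000-2222", "CVE-0000-9999"}

fix_first = scanner_findings & known_exploited   # actively exploited: act now
backlog = scanner_findings - known_exploited     # normal severity-based triage

print("Actively exploited, fix now:", sorted(fix_first))
print(f"Remaining findings for normal triage: {len(backlog)}")
```

This is the mindset shift from the earlier zero-CVE question in miniature: a short, evidence-driven "fix now" list instead of one undifferentiated pile of findings.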
I think, just adding to that: making sure that when we're talking about open source software, the legislation around it understands what open source is really about. There were certainly some flaws in the EU directive around security where open source is concerned. You cannot put requirements on open source software itself; you can put requirements on the commercially derived products that are built on top of open source. So I think we just have to be a little bit careful, when we're asking the legislators to come up with these rules and these guidelines, that they also understand the open source context in which that operates. And maybe just to combine those two questions: there's this shared fixation on decreasing the CVE count, versus a more sophisticated perspective of really, truly understanding what's in my software, what am I pushing? Understanding what you're shipping into production, and only pushing what's needed. Instead of chasing zero vulnerabilities, look at everything that's redundant, everything that's not core to the product, and remove it; then what remains is what you actually have to know about. And it is easier said than done. But knowing what's inside your container, your open source software, the pieces that you're using, is extremely important. It sounds simple, but unfortunately it's not easy to do. I think we have time for one more question. Can we take one more? Do we? OK, hang on. Of course they're all in the back, right? I know, right? I thought the shy people sit in the back, but they have the real questions. The late people too. OK, shoot. Thank you. My question is for Liz and Josh. Both of you alluded to this concept of runtime observability and firewalling. Could you speak a little bit more about what you meant, and are there examples of projects that do a good job of that today?
Yeah, so if I draw a parallel with networking: we all expect firewalls to drop packets that go to unexpected destinations or come from unexpected destinations. There's no "I'm going to send an alert into a SIEM to tell people that I got a funny packet"; we just drop the packets. I think the reason we don't do that so much for other aspects of executing software, things like what files we're accessing, is that the level of abstraction has historically been too low, too detailed. So we have things like seccomp and AppArmor and SELinux, and they're really hard. They're really hard to get right. And people are so concerned that the policy might be wrong, and might stop the software from doing what it was supposed to do in the first place, that they avoid putting these tools into enforcement mode and preventing out-of-policy access. I think we need better, higher-level descriptions of developer intent. And this should be easier in containers, where we have microservices. It should be easy to say: this microservice does this one thing, so it only accesses these files and it only needs these permissions. And anything outside of that boundary, we should just prevent, like we would prevent unexpected network access. I think I mentioned Cilium and Tetragon before. Tetragon is really good at the enforcement side, but I don't think we have the right abstraction today for describing the policies that it enforces. I think that's going to be the game changer that moves us to doing effective runtime security. Yeah, I agree. And I think the other half of that is we need the ability to observe what's going on, so we can build those policies. I have a history at Red Hat, where SELinux was obviously the thing, and there was basically a tool: you would run the tool, run your application, and it would look at all of the SELinux failures and then build you a policy.
But you can imagine, if you have something that's already been compromised or is vulnerable, you're also going to allow the bad behavior as part of your SELinux policy. So it's really hard to do. But I feel like we are pointing in the right direction. I'm hopeful, I'm hopeful that we'll see progress in the next couple of years that lets us do all this. Sounds good. Okay, if you have any other questions, this is how to find us on Twitter. We will be answering those. And Mastodon. And Mastodon, thank you, Josh. And Bluesky. But no, I don't have any invite codes. I haven't quite figured out the Mastodon thing yet. But I will be at the CNCF booth later, since I'm a CNCF ambassador; booth duty is part of our charter. And we'd love to keep the conversation going. You saw other maintainers and contributors raise their hands; there are lots of cool projects going on in this space. I want one of those plushy bees, by the way. I want a bee. But give our amazing panelists a hand for the work they do and everything they just shared with us. Thank you.
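[Editor's note: to ground the runtime-policy discussion above, here is a minimal sketch of the kind of low-level, allowlist-style policy Liz was describing, in Docker's seccomp profile JSON format. The syscall list is illustrative and deliberately incomplete; a real service needs far more calls even to start, which is exactly why hand-writing these profiles is error-prone and why teams hesitate to run them in enforcement mode.]

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "mmap", "brk", "futex", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Everything not on the allowlist fails with an errno (the "just drop the packet" behavior from the firewall analogy) rather than merely alerting. A profile like this would be applied with, for example, `docker run --security-opt seccomp=profile.json`; swapping `SCMP_ACT_ERRNO` for `SCMP_ACT_LOG` gives the observe-first mode Josh described for building a policy before enforcing it.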