Hello, hello, welcome back to theCUBE. We're here at Open Source Summit in Vancouver. No, don't switch that dial, you're not on the wrong channel. John Furrier has been kind enough to step aside and let me guest host this one, but don't worry, he's right there, making sure we all stay on our best behavior. It's also Haitian Heritage Month, so it's only fitting to have a Haitian-American guest star. Guest host, I should say. I am with the most amazing group of panelists. We are going to tease our panel a little bit, which is tomorrow at 11:30, I believe, here at the Open Source Summit in Vancouver, where we'll be talking about open source supply chain security: are containers the biggest blind spot? So that's the talk tomorrow. I'm here with three absolute security and open source experts, so I'm going to let them introduce themselves. Liz, why don't you start?

Hi, Lisa. Yeah, I'm Liz Rice. I am Chief Open Source Officer at Isovalent, which is the company that created the Cilium project.

Okay, Ayse?

I'm Ayse Kaya, and I am the VP of Strategic Insights and Analytics at Slim.AI, a container optimization and intelligence platform.

And Josh.

And I'm Josh Bressers, the Vice President of Security at a company called Anchore. We make Syft and Grype, the open source scanners, and we like to call ourselves next-generation SCA.

It's a very exciting space, all of this supply chain and open source. And supply chain security has been a hot topic. There was a keynote about it this morning. I don't know if you all caught that, the final keynote of the day. So this is a really hot topic. If we want one teaser of one problem that we want to tackle tomorrow, where would you start? Now I'm putting everybody on the spot here. Okay, so in your opinion, what's the weakest link, would you say?
Because we're only as strong as the weakest link, right?

Something we were just talking about before we came on to this segment: the sheer amount of software that is out there, 29 million packages in npm alone. And that was when I did the research a couple of months ago, so it's no doubt well past 30 million now. That's a heck of a lot of software that may well be carrying vulnerabilities into everybody's deployments. And that's just one programming language. So I think there is just the sheer breadth of possibly vulnerable software, which is only going to be added to by AI-generated software as well. So perhaps not today, but tomorrow, that's going to be the biggest weak link in my mind.

Okay, perfect. And Ayse, what are you excited to talk about?

I definitely think there will be more code than ever before with AI generation coming in. But I also think the uncharted territory when it comes to vulnerabilities in these software packages is a big blind spot, because finding vulnerabilities is a very human-focused activity. The way we find them is that a security researcher comes in and does vulnerability research. And the sheer amount of code, coupled with the fact that there are not enough security researchers and security reviews on all the open source packages and containers and software we put out there, is going to be a big, big weak spot, I think.

Okay, perfect. Josh?

Absolutely. And I'll take that and go even farther. I love data. Any time I can, I take data, see what I can do with it, make silly graphs, and analyze it. I've taken all of the CVE data that exists today, shoved it into Elasticsearch, and looked at it. And it's kind of plateaued around 20,000-ish vulnerabilities per year. 25,000 last year, right. And think about that.
We just said there's 30 million npm packages, and 25,000 vulnerabilities were published last year. That feels off; it's missing some zeros, is the way I think of it. And this is terrifying in a way, because we already struggle to handle 25,000 vulnerabilities a year, when it should probably be 250,000, maybe 2 million vulnerabilities a year. And that is terrifying because right now there's absolutely no way the industry could handle that, even if we wanted to.

I think the other real weak link is, once you've found a vulnerability, how do you apply the patches and upgrades and keep your software up to date? Everybody knows it's good hygiene to do it, but whether people are actually doing it is a very different story.

In publicly available containers alone, right, we looked at the top images, and when you look at the vulnerabilities introduced into the system and what happens six months later, only about 20% of those CVEs are resolved. 80% of the vulnerabilities we see stay in these containers longer than 180 days. So our remediation and repair cycles are extremely slow. So yes, the problem is big. We don't have an army of security researchers, biological and non-biological entities, looking at our code, and even right now it seems very challenging. We're not keeping up with the current issues at human scale, let alone AI-generated code and CVEs.

But now, here's the question I always have to ask. It sounds dire. We just spouted off a lot of terrifying statistics, yet I can still log into my bank and pay my bills, and I can have my dog food at my house tomorrow. So is it really that bad? That's what I always struggle with internally: it feels really bad, but society functions.
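As an aside, the kind of per-year tally Josh describes doesn't need Elasticsearch to sketch; here's a minimal, hypothetical illustration assuming CVE records are simple dicts with a published date (the real data would come from the NVD feeds):

```python
from collections import Counter

# Hypothetical, simplified CVE records; real entries from the NVD
# feeds carry an ID and a published timestamp, among much else.
cves = [
    {"id": "CVE-2021-44228", "published": "2021-12-10"},
    {"id": "CVE-2022-0001", "published": "2022-03-08"},
    {"id": "CVE-2022-0002", "published": "2022-03-08"},
    {"id": "CVE-2023-1234", "published": "2023-01-15"},
]

def cves_per_year(records):
    """Count published CVEs by year — the shape of Josh's plateau graph."""
    return Counter(r["published"][:4] for r in records)

print(cves_per_year(cves))  # e.g. Counter({'2022': 2, '2021': 1, '2023': 1})
```

Run against the full NVD dataset, a tally like this is what shows the plateau around 20,000–25,000 published vulnerabilities per year.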
And I know we're going to unpack this question a lot more tomorrow, but we're here in Vancouver and I'm watching the container ships go back and forth, so I have to ask about containers. And I know, Liz, you've raised this question before: are applications inherently more or less secure in containers?

The software itself is the same. I think there's always this, let's say, misunderstanding about how isolated containers are from each other and from the underlying system, and the answer is: not very, you know? So there's a little bit of a catchphrase that containers are not a security boundary. I actually think they are a security boundary, but you have to be very careful about how you enforce it. There are tools you can use to help you enforce a stronger security boundary, whether that's your container runtime or runtime tools that essentially ensure the behavior of that container sits within a certain profile. But without that, containers are not as isolated from each other as virtual machines are, or as full bare-metal machines are. So there's always going to be a bigger security challenge. Yeah.

And I know John will be disappointed in me if I don't bring up one of his favorite topics, generative AI; he's always a big ChatGPT fan. And I know you're an AI/ML expert and data scientist as well. So in terms of security, what extra security risks come up, and how are we dealing with security when we're adding in AI and ML?

So first of all, the last 15 years of my career have been at the intersection of data science and cybersecurity. And never in my career have I seen a technology that was born on the West Coast reach rural India and rural Turkey so fast. It's very impressive. I'm also an AI ethics advisory board member at Northeastern University, so we've been talking about generative AI and its implications from multiple perspectives for a while now.
The implications for security are going to be interesting, because there are so many different dynamics, system dynamics, to this issue. We talked about AI-generated code. Yesterday there was this announcement from OpenAI: they're using GPT-4 to understand GPT-2. So AI is beginning to try to understand itself. There's that explosive nature of new code being introduced; there will be more code than ever before. Business, marketing, all sorts of departments are empowered to generate code. I was a speaker at RSA last week, and I'll say that enterprise security teams are not that thrilled about everybody writing code, especially people who are not very familiar with the security implications of that code.

I think some enterprise security teams would prefer their own developers didn't write code either, right?

I would totally agree with that. And, you know, we can think of a security researcher being augmented by generative AI, so that finding the needle in the haystack is easier. But you can also think of the other side: bad actors creating more robust malicious code, weaponizing AI. So many different dynamics. I can talk about this topic for hours, but the bottom line is that this is a time when we need to be very deliberate and intentional about our next steps, because we are going to be empowered like never before. We will be doing a lot more with less, but the question is: are we ready for it?

Yeah. And I don't know if you caught Eric Brewer's keynote this morning. He was talking about automation and curation, especially when he also talked about supply chain security and SBOMs. So I know this is something you know something about. And I saw a question you were talking about: are containers just somebody else's computer?

Probably. I mean, this harkens back to what Liz just talked about.
What is the security boundary around containers? How many of us have just pulled random things from Docker Hub with zero insight into what was even in them or what they really did, and run them on our laptops or our servers or whatever? It's software someone else put together. So is it someone else's computer? If you think of it in that context, it really is. Someone else built this thing, and we're just taking it and running it with almost no regard for what it actually is. Which is honestly true of a lot of open source: it's something we think solves our problem, so we download it, we run it, and we see what it does. Which is awesome and terrifying at the same time.

And we all think that somebody else is looking at it for security issues.

That's right. Yeah. Well, we know they aren't, but everyone else thinks they are. But maybe now we'll have AI looking at all of this open source software, so at least somebody, or something, will be looking.

But do you think that 2022 changed this dynamic a bit, Josh? Because in the aftermath of multiple security incidents, people are no longer looking at containers as these building blocks, these atomic units for just shipping code. There's a lot more to it than being a simple black box.

I feel like yes. And I see this professionally with the prospects reaching out to the company, I see it working with the OpenSSF, I see it in many places. And I blame Log4j for this. I know Log4j gets brought up in every talk at something like this, but what I think happened, and what I saw, was a lot of organizations asking: do we have Log4j? They honestly didn't know, and they started looking, and they not only found Log4j, they found open source everywhere they looked. Everywhere you look, holy cow, there's more open source. And I think that changed a lot of conversations into: what are we running?
We have no idea, we need to figure this out, because this is a big deal. And many organizations I talked to were measuring the time just to find Log4j at three-plus months from the time Log4j began. They had no idea what remediation was going to look like. So you're probably talking years in many instances, which is kind of terrifying as a security person.

And since we're at Open Source Summit, maybe we should talk about some open source projects. I saw your talks at KubeCon a couple of weeks ago on eBPF, and they were awesome. I know you gave five talks, and you've written two books on eBPF, and you talked about the Cilium project, a really cool project. Do you want to give us an overview?

Yeah, so Cilium is probably best known for Kubernetes networking, and it's built on eBPF, which is this incredible kernel technology that allows us to instrument the kernel and change the way the kernel behaves. For networking, it allows us to create very efficient paths for network packets to follow within the system, and also to build in network policy, so we can drop packets that fall outside of a security policy. Cilium is also being used in quite a few non-Kubernetes environments, for example some load-balancing applications, and for connecting Kubernetes deployments with legacy or on-prem services. I think multi-cloud has been quite a hot topic this year, and it's certainly something where we're seeing a lot of user and customer demand: this ability to integrate Kubernetes cloud native services with pre-existing, maybe on-prem, services that they're still using and that are still very functional.

Okay, thank you. And I have a question for Ayse. I know Slim.AI, Isovalent, and Anchore are all mostly container tech, but what about non-container open source security? Since we probably won't go into that tomorrow in the panel, what can we say about it today?
In general, I think open source, containers, and the rest make up a vast, varied, and complicated landscape. From a security perspective, for containers, we are seeing that even after 2022 being the year of software supply chain security, and the aftermath of certain security incidents, vulnerabilities and component complexity have been on the rise. So there's a lot of effort, but it doesn't seem to be resulting in a lot of improvement in the numbers that we see. And that applies to other areas as well. So in general, my take is that open source, containers, and all the other software technologies we use are great for agility, efficiency, and speed, but there are areas that are ripe for innovation, especially when it comes to vulnerability remediation. And I'm just so hopeful that we get empowered by generative AI to do the human-scale activities we have been doing at previously unimaginable speeds, so that we can tackle the problems ahead.

Okay.

I will add to that something the OpenSSF is working on; it's a group I'm a part of, part of the Linux Foundation. So they're here; there are OpenSSF days going on upstairs as we speak. They have a project called Alpha-Omega. The Alpha side is taking roughly the top 100 open source projects, and just figuring out what the top 100 projects are is a bit of an argument in itself, but they're taking the very popular open source projects and sending humans to them, real security researcher humans, to help those projects, be it code audits or threat modeling, whatever needs to be done, and to help improve their security posture. The Omega side is meant to take thousands of open source projects and start asking questions like: can we deploy AI to help us understand this?
Can we deploy automated scanning to understand what's happening in these projects, really trying to take that bigger-picture view? And obviously, if we can do it for a couple of thousand, the idea is we can do it for a couple of million. So it's a really interesting project. In fact, in the OpenSSF keynote just now, Microsoft and Google both pledged $2.5 million towards Alpha-Omega to help further the project. So it's a big deal, it's really cool, and it's definitely something to watch.

Okay, I also just realized I didn't actually introduce myself. I'm Lisa-Marie Namphy. I do developer relations at Cockroach Labs, and Cockroach Labs is the company behind CockroachDB. We're all about meeting developers where they are, giving you that flexibility and operational control. And we have a new release coming out next week that's going to address a lot of these security concerns. That flexibility and that control are, I think, huge, especially when you're talking about security. So in case anyone was wondering why a database company is up here talking about security, other than the fact that I wanted to spend time with three of the most fabulous people on the planet: there is a tie-in to almost every technology, and everybody is concerned about security, right?

All right, we've got about a minute left. Who wants the final word? Crowdsourcing this out. I'm going to look at Ayse, because I know you always have something amazingly interesting to say.

Oh, thank you. With Liz and Josh on my side, I'll just say that open source and containers are simply amazing, and I think there's a ton of opportunity for us to grab. I'm cautiously optimistic when it comes to generative AI and its implications for cybersecurity, but again, we need to be very deliberate with our next steps. And this is the right crowd. We are stronger together, and the brilliance of the people at this conference is just humbling.

That's very, very true.
Okay, so from that: this is the first time I've actually hosted theCUBE. I've been on it a few times, but I think we're supposed to say something like: thank you very much, we are coming to you live from Open Source Summit in Vancouver, and this is theCUBE. Thanks, John, for letting me fill your seat for a hot minute. Thanks, everyone.