Welcome back, everyone, to theCUBE's live coverage here in Vancouver, British Columbia, for Open Source Summit North America. I'm John Furrier, host of theCUBE, with Rob Strechay, my co-host. Over the next three days of wall-to-wall coverage, we're going to be asking the smartest people in the industry what they think of what's going on in open source. AI, security, it's all happening. Open source is now the standard; the proprietary software era is gone. Open has won, and it keeps winning. There are real threats around AI and security, but the community is working on them. Our next guest is Vincent Danen, VP of Product Security at Red Hat. Vincent, thanks for coming on.

Thanks for having me.

So obviously we love open source, been there from day one. It wins; never bet against open. So it's kind of won, and it's winning. However, with the wave of AI coming, it's a tornado that could become a massive velocity of more code, what I call code pollution, potentially. But it's also an opportunity for the industry to get inside the tornado, so to speak, and make a real impact, make real change. With AI comes data; with data comes security. AI and security are the two hottest areas in open source right now, besides the fact that cloud native keeps expanding significantly. What's your take on all that? Because you're at Red Hat, you're in the action. You've got cloud, on-premise, edge, you're everywhere. What's your view on AI and security in the open source community right now?

Well, the biggest thing is probably privacy, right? I think it was just last week or the week before that data got leaked through some AI tools. So privacy is a big security concern, obviously, and there are ethics and morality questions around it as well. If I look at it from a product security side of things, it's the AI-generated code. Does the AI generate bad code? Does it generate good code? Can you trust it? I know there are some tools out there.
They're doing some AI around security information. But where is the data sourced from? Is it good data it's drawing on to make good conclusions? Or are we just going to start blindly trusting it and make bad decisions based on potentially bad data, because some AI told us to do it?

It's almost like we need a security seal of approval. But Rob, on our new CUBE podcast, we're on our tenth episode, every week now, last week David and I debated heavily around AI regulation. Of course, we're anti-regulation; we don't want the government meddling in these early stages. That's my personal opinion. We're totally against the FTC and Lina Khan on that piece. However, the innovation still has to solve all these problems. How do you look at the security piece, the software supply chain, with developer productivity at an all-time high and only getting better? How do you balance it, how do you not slow that down or be a drag anchor on productivity, while still accelerating and leveling up security as all this change happens? What are the software supply chain challenges you see right now in security, and how do we maintain developer dominance?

Well, a lot of it is, like you said, over-regulating certain things. There is a certain amount of safety that's required, right? You have safety regulations for vehicles and whatnot; some aspect of that needs to be there. But over-pivoting, as we're seeing in some governments, they're trying to push a little too hard, I think. And at the end of the day, that's going to either cause people to step away from open source, "I don't want to contribute to this anymore because it's gotten scary," or projects are just going to start blocking out certain geographies or certain countries.
And so we really have to pay attention to the regulation part, because if you make a lot of this stuff too heavy-handed, it's just going to slow the developers down. You talked about the pace of innovation and all of these things; if you over-regulate, everything else slows down.

It'll stunt the growth at birth.

Absolutely.

I have to ask you specifically: what are the conversations around, I hate the word regulation, but in the area of compliance and confidence? We want people to be enthusiastic, which they are, highly enthusiastic, hype cycle and all, but confidence comes down to making sure it's working. Which of the regulatory conversations being had right now are real, and which should be dismissed? If you can go there.

Can we say they should all be dismissed?

Okay, have fun with that.

No, I'm with you, over-regulating is bad. But if you look at the European Cyber Resilience Act, there's a lot of talk there about where liability lands. And that's a really scary place for people right now, because if I'm a contributor to open source, and I find a bug and want to submit a patch for it, and I can be held personally liable for that due to some regulation, I might not want to do that. Or even companies might stop allowing their code to be contributed back.

Is that being taken up at this conference, and at meetups around this stuff?

Yeah, I know there are a lot of people in the industry and in open source communities talking about this, doing some lobbying to say, hey, can we dial this back a little bit? Because the way we're pressing on this right now, I'm concerned they're going to make decisions before the full ramifications are realized.
A term I heard a couple of weeks ago, talking with somebody about this, was: how do we make sure they don't kill the golden goose? Given all the benefits open source has provided to the planet at large, if you take some of that away, if you slow down that innovation, over-regulate, or make people scared to contribute because all of a sudden my personal finances are at stake, you slow all of that down, and all the great gains we've made, we start to lose some of that.

Well, Vincent, you mentioned that, and this is a huge point. I remember the days, Rob, you probably do too, when that security question came up in open source. Go back maybe 20 years, when it really surfaced because the proprietary guys were getting threatened by open source: "it's not secure." There was always that fear, uncertainty, and doubt around security. And now open source is more secure than ever, and everyone's saying no, the more transparent you are, the more open it is, the better the security. That's the premise. How does that apply today? How do you make that argument today? Because I believe the same is true now: let the innovation happen, let the developers and the industry solve the problems in the spirit of transparency. So how do you do that with AI and this new wave of code coming in? What's an approach?

Yeah, transparency is the key part, for sure. That's one of the great benefits of open source. Look at some simple metrics: in the Red Hat Product Security Risk Report that we put out last year, we've been tracking exploited vulnerabilities, the ones from CISA's Known Exploited Vulnerabilities list. It's predominantly proprietary software that's being exploited out in the wild, with a fairly small amount of open source.
The open source components being exploited out there are more your browser-based things, your Firefox, your Chrome, things like that. So if you look at actual exploitation of open source, it's really low compared to proprietary software. Part of that, I suspect, is the transparency. It's the ability to deliver fixes at scale and at speed, because you have a lot more people involved, a lot more people looking. So the notion that open source is not trusted, or untrustworthy, I take great offense at that, because I think it actually is trustworthy. The open source communities take that responsibility very seriously, and they actively work to minimize the risk of using open source. And oftentimes they're not getting paid to do it. That's just the common good we're accustomed to in open source.

You know, one of the things Rob and I talked about in our intro is the software supply chain and developer productivity angle: SBOMs, software bills of materials, as they're known. They were prominent in the CNCF and KubeCon EU conversations, and with North America coming up in November in Chicago, they'll certainly be at the center of the conversation. That brings up the OpenSSF, the Open Source Security Foundation, as well, and you've also got the Cloud Native Security Conference. Security seems to be breaking out in growth. So security and data are super important, AI and security again the top things. SBOMs are super important; people make them feel like a big part of it. Are they the silver bullet for cloud native security, or more of a comfort blanket for people right now? Because there's been a wide-ranging conversation about the relevance of SBOMs, whether it's all hype, what role they actually play. What's your take? Are SBOMs a silver bullet?

They're not a silver bullet, right?
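The CISA Known Exploited Vulnerabilities list Vincent references is published as a machine-readable JSON feed, so the kind of tally he describes is easy to reproduce. A minimal sketch, assuming the feed's top-level `vulnerabilities` list with `cveID` and `vendorProject` fields; the sample entries below are made up for illustration:

```python
import json
from collections import Counter

# Made-up sample in the shape of CISA's Known Exploited Vulnerabilities
# (KEV) JSON feed; the real feed is downloadable from cisa.gov and has
# the same top-level "vulnerabilities" list.
KEV_SAMPLE = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2023-0001", "vendorProject": "ExampleCorp", "product": "Widget"},
    {"cveID": "CVE-2023-0002", "vendorProject": "ExampleCorp", "product": "Gadget"},
    {"cveID": "CVE-2023-0003", "vendorProject": "OtherVendor", "product": "Thing"}
  ]
}
""")

def exploited_by_vendor(kev: dict) -> Counter:
    """Count known-exploited CVEs per vendor/project."""
    return Counter(v["vendorProject"] for v in kev["vulnerabilities"])

if __name__ == "__main__":
    print(exploited_by_vendor(KEV_SAMPLE))
```

Running the same tally over the real feed is how you would check the proprietary-versus-open-source split he mentions.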
And I think there's a lot of misunderstanding about what an SBOM actually provides. I'm concerned, and I suspect, that a lot of people think an SBOM is a checkbox to check: I have my SBOM, I'm good, no worries. An SBOM is literally the list of ingredients in whatever software you're using. What's not included is the recall list. You go to a grocery store and there's some bad food sitting on the shelf; I can see the list of ingredients, but I don't know from that list that it's bad. I have to look at a separate recall list to know I either shouldn't buy it or should throw it away if I already did. And that's how we look at it: there's the vulnerability exchange, VEX, as one source of vulnerability information, and other vendors provide other sources. You marry that with your bill of materials, and now you have a better picture of the software you actually have installed and the vulnerabilities that may or may not be present.

I think a particular use case for SBOMs would be vulnerability scanners, because right now they're doing a lot of guessing. They're scanning a container, a system, a virtual machine, whatever, and they're guessing what the content is, what they can do with it, and what vulnerabilities may or may not be present. We feel a lot of that pain at Red Hat with these vulnerability scanners. Now if they started using an SBOM, they could get a better picture of the actual software installed, and if they used, say, a vendor's vulnerability information, they'd get a fairly true picture of the software and its vulnerabilities.

How do you marry that together? Can you take us through the process, how it works? How do you marry the SBOM with the vulnerability data? Because everyone has scanners, they're scanning stuff. How do you marry that together?
What's the process? What's the best practice?

Well, the best practice, with the best uptake, would be for those scanning vendors to actually use the SBOMs that a vendor provides. Then it's easy: I can still use the same tool, the tool has been updated to use a proper set of package information, ideally a proper set of vulnerability information too, and there's no cost or effort on my part. Beyond that, I don't know that there's a lot. A lot of the focus the last year or so has been on creating SBOMs; we're not doing a very good job of talking about what to do with them once we have them. So there's still tooling that needs to be created to marry those two types of information, or even to divine information out of an SBOM itself: a manifest of the licenses included, if you're concerned about licenses, the actual versions of software, the types of software. There's a ton of potential in SBOMs that people aren't really talking about. I'm hoping that starts to shift once more SBOMs are created and we realize, great, we have this thing, what do we do with it? I think the conversation will shift to all the really cool things we can do with them now that we have them.

Right. And the question that keeps coming back to my mind goes back to where the code is coming from, how it ends up in that SBOM, and whether they're keeping their SBOM up to date. Especially when you look at SaaS-delivered services, where they're trying to ship as fast as possible, as much as once an hour, once a day, at least a couple of times a week they're pushing new code, and things are being introduced at a great rate.
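Marrying an SBOM with vulnerability data, as described above, is essentially a join between the ingredient list and the recall list. A minimal sketch, where both the component records and the VEX-style statements are made up for illustration and don't follow any specific SPDX, CycloneDX, or VEX schema:

```python
# Sketch of "marrying" an SBOM with vulnerability data: join the
# ingredient list (what's installed) against a recall list (what's
# affected). Field names are illustrative, not a real schema.

# The SBOM: what is actually installed.
sbom = [
    {"name": "openssl", "version": "3.0.7"},
    {"name": "zlib", "version": "1.2.13"},
    {"name": "libxml2", "version": "2.10.3"},
]

# VEX-style statements: whether a CVE actually affects a component.
vex = [
    {"cve": "CVE-2023-0001", "component": "openssl",
     "affected_versions": {"3.0.7", "3.0.8"}, "status": "affected"},
    {"cve": "CVE-2023-0002", "component": "zlib",
     "affected_versions": {"1.2.12"}, "status": "affected"},
    {"cve": "CVE-2023-0003", "component": "libxml2",
     "affected_versions": {"2.10.3"}, "status": "not_affected"},
]

def match(sbom, vex):
    """Return CVEs that apply to components actually in the SBOM.

    A 'not_affected' status suppresses the match even when the version
    is listed -- that is the value VEX adds over raw CVE feeds.
    """
    findings = []
    for comp in sbom:
        for stmt in vex:
            if (stmt["component"] == comp["name"]
                    and comp["version"] in stmt["affected_versions"]
                    and stmt["status"] == "affected"):
                findings.append((stmt["cve"], comp["name"], comp["version"]))
    return findings

if __name__ == "__main__":
    for cve, name, ver in match(sbom, vex):
        print(f"{cve}: {name} {ver}")
```

This is also why scanners guess less when they consume a vendor SBOM plus the vendor's vulnerability statements: the join is over known-exact versions rather than heuristics.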
And going and scanning the entire system for everything after the fact is just unattainable. Having built those myself in the past, you start to look at it and go, okay, how do we keep up with that? How is the open source community really trying to address that? I know there are a number of projects, though I can't recall them off the top of my head.

There's actually one my team is working on. We call it the component registry. The premise behind it is that if you're scanning after the fact, you're going to get bad information, because there are embedded dependencies, transient dependencies, things like that. The right way to do it is to collect all that information as you're putting things into your container, your build, or whatever. So as you're putting that information into the container or into a build, we're extracting the relevant SBOM material, for lack of a better term, and storing it, so we can say: we intend to build this thing, and we know what it looks like because we built it this way, at this time. Then once it gets released, we already have the corpus of data that says this is what the SBOM will be. Now, if something has to change because you've introduced a new piece of code, fine, you replace that piece of information. It is at the build stage that you have to create the SBOM. Scanning after the fact is never going to work.

So it's more the shift-left, DevSecOps type of mentality that needs to really be there.

Yeah, same idea. If you want to see vulnerabilities in the code, you want to see them as you're building it, before you release to a customer, not after: "oh, sorry guys, didn't mean to do that." You want to know as you're building it: do I actually have to stop this? Is that an acceptable vulnerability because maybe it's minor, or do we have to stop and fix it?
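The build-time approach just described, record each component as it enters the build rather than scanning the finished artifact, then decide whether anything found should block the build, might be sketched roughly as follows. The class, patterns, and gating rule are all illustrative assumptions, not Red Hat's actual component registry:

```python
import re

# Sketch of the "collect at build time" idea: record components as
# they go into the build (this list becomes the SBOM), and run checks
# like credential scanning while there is still time to stop the build.
# Names, patterns, and policy here are illustrative.

CREDENTIAL_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS-style key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

class BuildRecorder:
    def __init__(self):
        self.components = []   # becomes the SBOM
        self.findings = []     # anything that should gate the build

    def add_component(self, name, version, source_files=()):
        """Record a component entering the build and scan its files."""
        self.components.append({"name": name, "version": version})
        for text in source_files:
            for pat in CREDENTIAL_PATTERNS:
                if pat.search(text):
                    self.findings.append(
                        f"possible leaked credential in {name} {version}")

    def gate(self):
        """True if the build may proceed, False if it must stop."""
        return len(self.findings) == 0

if __name__ == "__main__":
    rec = BuildRecorder()
    rec.add_component("myapp", "1.0", ["password = 'hunter2'"])
    rec.add_component("zlib", "1.2.13")
    print("SBOM:", rec.components)
    if rec.gate():
        print("build ok")
    else:
        print("build blocked:", rec.findings)
```

The key property is the one Vincent names: the SBOM and the findings exist before release, so the "stop or accept" decision happens during the build, not after.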
That all happens during build. You need this stuff to happen during build, and that's where you're collecting the information for your SBOM. You're doing some vulnerability scanning, and you're looking for things like leaked credentials in the code; that's happened, right? As you're going through that build process, that's where all those checks need to be. If you're looking for it after, it's too late.

Vincent, talk about your career in open source: how you got in, how long you've been on this journey, and how you see today's landscape. All three of us have been around the block and seen the good, the bad, and the ugly. What we're seeing now is another massive step function of open source growth coming, and a lot of opportunity, with a young generation coming in who have never loaded Linux on a server. They don't even know what that is, software actually coming on a CD, or disks for that matter. Not to have a get-off-my-lawn mindset, but there's a great opportunity for this next generation of open source. Take us through your journey in open source, and then what you see in front of you and the opportunities we should be focused on.

Yeah, I've been doing this for over 20 years. I had my own consulting company, basically doing web development and things like that. Then I worked for Linux Mandrake, MandrakeSoft back in the day, for eight years, and I was their security team. It's kind of funny, I didn't start on the security team. I started doing documentation and general packaging, and they just kind of said, there are a couple of vulnerabilities every month or so, do you want to be responsible for that? Sure, whatever.

You have no idea when you just accept it.

Yeah, I had no idea, right? And then, so...

The one-way door right there.

Eight years later, right?
I was doing all the security work full time, multiple times per week. I had the entire build system in my basement. I can say this now because they're not around anymore, but the entire build system for Linux Mandrake was in my basement. I built all the security updates and patches, pushed them out, et cetera. Then I moved over to Red Hat, and I've been on the product security team for 14 years now.

I love open source. When I started, it was the coolest thing to be able to create software and make it freely available. There was a stint of about five years where I created my own secure Linux distro. I built the thing from scratch, built my own installer and all of that. Learned a ton. It was probably the most stressful and interesting period of time, because you learn so much. And I think it's a lot more complicated now. With all the Kubernetes clouds, there's just a lot more software, a lot more to learn, new technologies changing rapidly.

But sometimes I get concerned that, where 20 years ago we talked about vendor lock-in, now it's almost ecosystem lock-in. I started using this thing, and five new things have sprung up that are potentially better, and I want to switch to one of them. But, and I was talking with someone the other day about this, I have 5,000 Jenkins jobs. How do I move from that to something else? I'd have to redo all of those jobs. So there's a complexity now in open source, but it is ecosystem driven, which, by the way, is both a good thing and a bad thing.

If you do get locked in, you still see people migrate from one project to another. For instance, I remember when OpenStack, once Amazon's cloud started taking off, settled in on more of the infrastructure side. They've got a big event coming up here in Vancouver, now called the OpenInfra Summit.
And then the CNCF took a lot of the cloud native aspect of that community over here. So you have migration among people.

Interestingly, that's where automation and AI could probably help.

Explain.

Well, if you can write tooling that migrates tasks from one system to another, say from Jenkins, as an example, to Tekton, as another, that makes the process of moving from one to the other a lot less painful. And maybe that's something a community like this looks at: okay, there are all these tools here, we want to drive adoption, so we're going to create some of that tooling that does the migration for you.

And the other thing we're seeing is that projects that should be abandoned should just go their way, while sometimes projects that look like they're not working should actually be thriving. So there's a need for information about projects beyond GitHub stars: there are projects that legitimately should be thriving and might need more attention, and some projects that should just wind down.

Well, that's where you look at community health metrics, or say the OpenSSF's scorecard, right? You can look at some of those things and determine that some projects have just lived their life. Like that operating system I built: I supported it for five years, four or five releases, and at the end I was just done. I just couldn't anymore.

Put a fork in it.

Yeah, exactly. So I stopped. It's still fully available; anyone who's interested could look at it. I wouldn't suggest it, it's been 10 years or so, but it's still there. There's no support and no additional development, because I knew that as time went on, other operating systems met what I was trying to do.

Well, Vincent, that could be a candidate for the Computer History Museum as we get older. Don't laugh.
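The auto-migration tooling Vincent describes, Jenkins jobs to Tekton tasks, might look in spirit like the following sketch. The simplified Jenkins job input is entirely made up, and real migrations are far messier; the output just mirrors the general shape of a Tekton `Task` (apiVersion `tekton.dev/v1`), with each shell step becoming a Task step:

```python
# Illustrative sketch of CI auto-migration: turn a simplified Jenkins
# freestyle job (a name plus shell steps) into a Tekton-Task-shaped
# structure. Input format is invented; real Jenkins jobs carry far
# more state (triggers, credentials, agents) than this handles.

def jenkins_to_tekton(job_name, shell_steps,
                      image="registry.example/builder:latest"):
    """Map each Jenkins shell step to a Tekton Task step."""
    return {
        "apiVersion": "tekton.dev/v1",
        "kind": "Task",
        # Kubernetes names must be lowercase and dash-separated.
        "metadata": {"name": job_name.lower().replace("_", "-")},
        "spec": {
            "steps": [
                {"name": f"step-{i}", "image": image, "script": cmd}
                for i, cmd in enumerate(shell_steps)
            ]
        },
    }

if __name__ == "__main__":
    task = jenkins_to_tekton("build_app", ["make", "make test"])
    print(task["metadata"]["name"], len(task["spec"]["steps"]))
```

Serialized to YAML, a structure like this is what a migration tool would emit in bulk, which is the point: 5,000 jobs become a batch transformation instead of 5,000 rewrites.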
Lou Tucker has product in there. Lou Tucker, former CUBE alumni, part of the community. Vincent, thanks for coming on theCUBE. Really appreciate you taking the time, sharing your journey, and the commentary on SBOMs and your views on security, AI, and regulation. Really appreciate it.

No, thank you for having me.

Okay, this is theCUBE's coverage here in Vancouver at the Open Source Summit. I'm John Furrier, with Rob Strechay. Breaking it down, unpacking open source, where it is and where it's going. This is theCUBE. We'll be right back with more after this short break.