Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to episode three of Kubernetes by Example Insider. I am Chris Short, host of Red Hat Live Streaming, and I'm joined by some of my favorite Red Hatters. I'm joined by the one and only Langdon White, Gordon Tillmore, and Luke Hinds is our special guest today. Langdon, you're back. Formerly a Red Hatter. Yes, formerly a Red Hatter. Yeah, I haven't gotten used to that yet. It's only been a few years. That's a hard habit to break, isn't it? Exactly. Well, I try to keep the reminder with, I have my, oh, you don't have the hat in the background today. I was looking for matching hats. As you can see, I have a Boston University hat behind me over my left shoulder. Yes. But you often have the hat there. So welcome to everybody for the third edition, as you said, of the Kubernetes by Example Insider. Although we may be changing the name soon, I don't know. You know, probably everything will change around the same time as the Level Up Hour swag shows up is my current theory. So yeah. You had to do that. That one hurts Gordon's heart. Had to put the big game. So today, we always like to start the show with a little bit of news from what's going on in the Kubernetes world. And unfortunately, our normal news host is currently on vacation, I believe, or starting school. And so we have Gordon here for us as a fill-in replacement. And I'm sure he will do a lovely job. Maybe not as good as Mina, but it'll be close. It's, man, big shoes to fill, right? So yeah, Mina is off with family in Turkey. And hopefully she's not watching this. Hopefully she's exclusively with family and steering away from work. Yes, I am. I'm filling in this month and I'll do what I can to match up to Mina's high standards. But yeah, let's start with news. I'll be judging, really. You'll be judging? Is there like, are you the East German judge? Am I gonna get a bad mark from you? We'll see.
All right, well, I'm gonna do things a little different than Mina. Mina would cover, oh gosh, 10, 12 articles sometimes. I'm gonna focus on four. I think, and in particular, there's a theme here, a security focus this month. A lot of security articles popping up over the wire as of late, and I guess it's no different than any other month, right? But, but... It's been heavier this month than others, yeah. It's been heavier, I would agree with that. And we also have, of course, this feeds in really nicely with our guest, who was so happy to join us. But let's start with the first article here. So all of these articles can be found on KBE News. So go to Kubernetes by Example and navigate over to the news section. I'll put links in the chat window here in a moment. But the first article we're gonna cover is the what and the why of cloud native security. And that comes to us courtesy of the folks at Container Journal. And the main gist of this article is that too many organizations, too many, still rely on traditional security efforts for their cloud native architectures, right? So, you know, traditional tools, traditional approaches, traditional policies even. And this is, of course, a mistake for so many reasons, right? So the author then goes on to define cloud native security, which helps. But he also breaks it down into something I hadn't heard of. Maybe you guys have heard of it. Maybe, I think, he coined this phrase actually. It's called the four Cs. So picture a cake, starting at the bottom layer, the platform, with cloud, right? And then you move up to the next layer and it's cluster, and then you move up to a third layer and it's container, and then code is your final layer. So that's the four Cs, and the whole idea behind this is, look, each of these layers is different and you gotta treat each of these layers differently with your security approach. Right. So it is a cool article. And I will post that link up here in a moment, as I mentioned.
But that leads nicely to the second article that I wanna focus on. And that one's called How to Harden Kubernetes Systems and Minimize Risk. And it's from Help Net Security, but in it, these guys are essentially showcasing a recent report that was put out by our friends at the NSA and also the Cybersecurity and Infrastructure Security Agency, say that five times fast. But so NSA and CISA, right? And so the report lays out the primary threats to Kubernetes environments and actions that we should all take to combat these threats. But Chris, Langdon, Luke, even, how about a quiz? In your opinion, what are the three, I'm keeping it interactive here. What are the three leading types of attacks in Kubernetes environments, according to our friends at the NSA? Any ideas? And I read this and I've forgotten it. Damn it. DNS poisoning? Oh, that's one. Okay. Very good. I think like a hacker apparently. Man in the middle. Are you scared? No. Well, here, let me give you the rest. So data theft, very general, of course. That's really broad. Yeah. I know. I know they're going broad with a lot of this. But the second one was interesting. Computational power theft. Oh no, I read a paper about that this morning. Is this like Bitcoin mining on Kubernetes or whatever? Yeah. Exactly, cryptocurrency mining. Yes. And it always comes back to cryptocurrency, doesn't it? But there you have it. So thank you, NSA. But they spell out any number of ways that we can all minimize these threats, right? Anything from scanning pods and clusters for vulnerabilities to running containers and pods with the least privileges possible and using strong authentication, firewalls, et cetera, all that fun stuff. But the full report, the full article of course, is available on KBE News. I, again, will post all these links here in a moment. The third item, the third article I want to speak to you guys about this morning. Interesting.
It's called Concerned About Kubernetes Security? Check Out These Four Tools. So TechGenix puts this out. They focus on these four open source tools that will help us maintain Kubernetes security. I'd heard of one of them. I'm curious if you guys have heard of any of these four. You probably have. You're closer to it than I am. But kube-hunter is one. So, finding security holes. Have we or haven't we, Chris? Curious. Have. Have, yes. Okay, that's the one I've heard of as well. It's been featured in a newsletter kind of thing. There you go. Yeah. Kube-burner is another. So, stress testing. I've heard of that one now. Sonobuoy, which determines the overall security level of a Kubernetes cluster by running a set of plugins. And then the fourth one's my favorite one, just because of the name, and I hadn't heard of this, maybe you guys probably have. PowerfulSeal. Anybody heard of that? It injects failure. Like seal, like a. Yeah, like seal, like the animal. It's spelled the same anyway. Yeah. Well, you're right, Langdon. So in my mind, I'm going to gravitate towards the animal then. Well, I'm sure it's a pun, right? Like, yeah. It has to be. Yeah. That's pretty good. The whole idea behind this, here's the cool thing. It injects failure into Kubernetes clusters and, you know, basically tests the admins and users. How quickly can you fix this, right? So they call it complete chaos experiments, which is fine. Yeah, so it's like the Netflix chaos tools, which still, like, blow my mind. The whole, you know, let's knock out a server and see how we react. Let's knock out an entire service in a certain region and let's see how we react. Yeah, that's pretty wild. There you go. Yeah, exactly. I think that's it. Yeah, like if you're getting to the point where you can use tools like that, you're in a good place. You're doing something right. Right. You know, which I think is rare enough.
Oh, I forgot I have a Kubernetes by Example coffee mug, I could have brought that. I do too. I totally spaced. It has been co-opted by one of my children, so that's part of the problem. You know, I'll give you guys a quick hint with this. With Kubernetes by Example mugs and shirts and everything else, look to see them on the Red Hat Cool Stuff store soon. Oh, cool. Stay tuned. I know you always like talking about swag, Langdon. I do like swag, yes. I like buying it. Yeah, well, there you go. We might even be able to get some freebies your way, Chris. We'll see what we can do. We'll see what we can do. So anyway, let me tell you, you've got the fourth item on my list, the final item, and I think the coolest of the four, frankly, because, number one, I love Wired magazine. And yes, I still am an old schooler who gets the actual physical Wired magazine. Subscribing to the magazine is actually the cheapest way to get the website unlocked. There is that. Can't you do it with ads, though? I think you can. I don't know. I don't know. Yeah, I'm not sure. I don't know. But yeah, I still subscribe to the magazine. I've been reading it since the 90s. There you go. Well, you beat me on that one. But there's something about sitting on a couch or on your porch and reading an actual physical magazine that still appeals to me. Maybe that speaks to my age. But anyway, let's talk about the article in here. And it's cool because it's from Wired, but it's also cool because it features somebody near and dear to the Kubernetes community and somebody appearing on the show today. A New Tool Wants to Save Open Source From Supply Chain Attacks. So we've all been hearing more and more about supply chain attacks in the news as of late. So in essence, right, a hacker slips some code, bad code, into legitimate software. It propagates and, well, before you know it, destruction ensues.
So we saw that with, heck, this Russian SolarWinds cyber espionage effort, which we're still plagued by, and I think we saw it with NotPetya, the malware attack. Thank you, Putin, for those. And this article talks about how we combat that, right? And the big item on the list to combat that is something called Sigstore and code signing. And who would know about Sigstore and code signing? I have no idea. Yeah, yeah. But man, if that's not a perfect segue for you, Langdon and Chris, I don't know. I can't do anything more for you. Right, right. I think that's pretty good. I did want to make a quick comment about the first article you mentioned, which was kind of talking about the four Cs there. I've been reading a lot of spy novels lately. And one of the things that kind of keeps coming up is layered security, right? Is that if you have multiple layers in your security mechanism, and this is talking about in the physical world, right, that's the best way to do it, because you breach one and then you have to breach the next one, right? They're not dependent on each other. And I think in the back of a lot of technologists' minds, right, they've known about this concept, right, for a long time. But it's really started to show in, I would say, more recent years, is that we're really starting to have a much better effort towards that. Even if you just combine a firewall with SELinux, right, that's kind of multiple layers. But with that, let's transition to our guest for today. And that's Luke Hinds, who is conveniently quoted in the Wired article that Gordon just brought up. And we invited him here today to talk about Sigstore and talk about kind of like why, and what do we want to do there? But before we do that, Luke, do you want to give a brief introduction to yourself? As I often joke, it's very hard to keep track of titles and organizations inside Red Hat. And now that I'm not even employed there, it's even worse.
So rather than me guessing and being wrong, I will just say, could you please introduce yourself? Tell us what you do for Red Hat. And then we can kind of start talking about Kubernetes. Sure, yeah. OK, so first of all, thanks for having me on the show. It's great to be here. Really excited to take part in this. And yeah, so I am Luke Hinds, working at Red Hat, as already outlined. I'm in the Office of the CTO. OK, and in the Office of the CTO, we have a department called Emerging Technologies, OK? And our focus is one or two years out, essentially. Technology that is not considered enterprise-grade as yet, perhaps new idioms that are being discovered, and various projects trying to collaborate to find consensus on a particular project that can become a solution to a particular problem set. So I have a team of engineers that I lead in the security domain, OK? And we look at all sorts of areas. Predominantly cloud native, OK? There's a lot of stuff around cloud native, so container runtime security, but also things like trusted execution environments, trusted computing, so the TPMs and so forth. And of course, software signing, OK? And software transparency and supply chain security. So yeah, I've been at Red Hat for just coming up on six years now. And I have a long history in security and open source as well. So. Cool. And in your history in security, has security been getting better? Or is it about the same? Wow, that's a hard one. Peaks and troughs, really. I guess a little bit like the Bitcoin chart. You think it's going the right way, and then it kind of, you know, dips and changes. You know, it's funny you mention the peaks and troughs, because yeah, we've been in worse situations security-wise than we are today, for sure. Yes. But we've been in better too. Yeah. I think to be fair, there's been a lot of disruption, OK? So if we look at security, so Langdon, you made the point earlier about layered defense, OK?
So traditionally, security has been a relatively simple domain, OK, to think about. You had a green zone and a red zone, OK? So effectively, everything outside there is just not trusted, OK? Everything inside is trusted. And this is our citadel, and we protect that. So I used to work on firewalls quite a long time ago, and you know, you literally had a red interface and a green interface. Good and bad. Good and bad cops, I think. Cloud came along, OK? And then the principles of elasticity, scalability, hybrid infrastructure, hybrid cloud, all these sorts of things. And it just turned the whole thing on its head, essentially, where the trust boundaries were no longer easy to define. It was a very mixed grouping of security controls that needed to be thought about and then implemented. And then, of course, software has accelerated, software is eating the world. So we're starting to see projects utilize a multitude of dependencies that come from multiple different sources. So in a lot of ways, security has definitely become more of a challenge, but it's an exciting challenge, I feel. But I think in the olden days, I'm sounding like the old guy now, I would have said it was simpler. The attacks could have been very complex, you know, and the particular vulnerabilities that there were, but the whole architecture was definitely easier to grapple with, really. So I would slightly disagree with you on one aspect, which is that one of the things that I thought as a consultant in those days about the move to the cloud was it actually made people start to think significantly more about the kind of holes in their firewall or whatever. Like I still remember working with a lot of banks, and basically they'd be like most banks, right? They run kind of batch jobs of processing overnight or whatever, and they would have all of these holes in their firewalls from all these different organizations they worked with.
But then when they kind of moved to the cloud, they did a bunch of things, right? They started to split their services up across virtual machines, right? They started to have to be conscious of what was routing where. And so while I completely agree with you, it is way more complex, I think some of that complexity helped to drive, now we need people who think about this problem, way more than it just being an extra job the sysadmin has to do. So which I think is kind of an interesting ramification, that while you're completely right, I think in some ways it actually had a positive net effect, even though the complexity went way up. If you know what I mean, you know? Well, I mean, to speak to Luke's point, we wouldn't have, I mean, we have things like Let's Encrypt now, where more of the internet is served over a secure connection than not. So I would say, yeah, that's a win for security, but we've also evolved from the days of, like, worms and such to now it's, like, a massive distributed denial of service attack, right? So instead of having to infect systems, now we can just reflect attacks off other systems and we end up with the same outcome, right? So it's kind of weird, right? Like the scaling has given us more capabilities, as well as the adversaries too. So yeah. And at the same time, it's introduced automation and agility to security as well. So yeah. And companies like Cloudflare too, right? You know, who basically their job is, you know, figuring out DDoS, you know? And, you know, solving for that problem. Systems are a lot more ephemeral as well. You know, you look at the old traditional system, you kind of, you spend two days installing your operating system, getting all the network cards working. And it's like, right, I just have to remember to run sudo yum update every two weeks. And, you know, it was very monolithic, and obviously now everything is very ephemeral. All right.
So let's bring it back around to Kubernetes and kind of open source. So one of our kind of standard questions that we like to do on the show is, you know, and this may be independent for you, but it's like, so what brought you to open source, or what brought you to Kubernetes first? I'm not sure which one you were involved in beforehand. You know, if you were, you know, why open source? So, okay. So open source goes, we're going years back here. Okay. So probably I have to admit about 20 years ago. Yeah. Yeah, I know the feeling. Yeah. So this was, I worked as a software developer, okay? And we were developing a speech recognition engine. So this was actually for mobile devices. So this is stuff like Palm OS, Windows CE, okay, that kind of old technology. And we had a couple of offices, one in London and one sort of out in the sticks where I was. And we needed a simple point-to-point office VPN connection, okay? Because like we had an Exchange server and the sales folks needed their email and stuff like that. And predominantly, Windows had been my gig really. I didn't really know much about Linux at all, okay? And I started to play around. I got a very early version of Red Hat Linux, okay? And installed it. And then I remember sort of double clicking on things, thinking they'd be like an executable, and nothing was happening. Quite confused, this new alien world sort of thing. Like my whole frame of perception around operating a computer was specific to Windows. I hadn't even touched a Mac. And so anyhow, I kind of fell into that Pandora's box and fell in love with it. And so anyhow, I had to create this sort of VPN tunnel effectively and then think about, I think I used to call them road warriors, people that would be dialing into your network. And so it was, I think the technology at the time was PPTP, Point-to-Point Tunneling Protocol. Oh yeah. Oh yeah. There were a few open source projects.
Yeah, so I started to get those working, and I had to learn how to compile things, compile a module for the kernel and so forth. So I got involved helping them with documentation, trying to sort of fix things that didn't work. And interestingly enough, the distribution where I was more prolific with the work that I was doing was Red Hat Linux 8. Well, this was actually just called Red Hat 8. Red Hat, yeah. Which was Red Hat 8 like years ago. And like I'm sitting here with that 1990s Red Hat, yeah. Yeah, so yeah, I kind of caught the buzz there really. I became a Linux user effectively. I joined a Linux user group. And yeah, and then it just sort of, throughout the years, I sort of gravitated to that ecosystem more, and security had always been something that interested me. And I started to sort of gravitate towards that area. And that's where I am today really. So I've kind of got a fairly long history of working in open source. But I'd say really the past sort of six, seven years that's been accelerated, where I've worked on projects and really started to contribute code and sort of become part of communities and stuff like that. So I guess that's more of a sort of power user path, I guess you'd consider. Right, right. Yeah, which is how we get most of us, don't we? We start off as users and sysadmins. That's part of, like, in some ways it's like the drive for, like, Fedora ambassadors or whatever to try to drive adoption, right? Because the best way you get contributors is people using your software, right? And then something annoys them and then they want to fix it. And so they come along and become a contributor. Going back to Red Hat Linux 8, I think the next version of RHEL will be the first version that actually goes past the version numbers of Red Hat Linux. That's a good point. Yeah, yeah. So I'm hoping for some sort of big celebration as a result.
I'm curious if they actually end up doing a nine, because nine is a bad luck number in some parts of the world. That's why Apple skipped nine. Oh, I didn't realize that. Yeah, so mostly like some Asian countries, I can't remember which exactly, that part of the world. So all right, so kind of going back to Kubernetes. OK, so you talked about kind of becoming a Linux power user that kind of pulled you into the fold. So what attracted you about Kubernetes, and what brought you to that particular project? Yeah, sure. So I followed Kubernetes from pretty much the early days, I think when it first started creeping outside of Google and getting on the map of various people as an interesting technology. And I'd actually been an OpenStacker. So another sort of cloud-type infrastructure project. So I'd been working there for quite some time. And I had a community position as a project team lead, like an elected lead within the community for security. So in there, we would sort of help manage embargoes and create processes. We'd do all sorts of things, really. Documentation, it was kind of a multi-role group. And there were quite a few people collaborating. And things started to quiet down there. So security started to establish itself quite well in OpenStack. And I'd been following Kubernetes for a while, but I wasn't prolifically contributing to the project. And somebody that I know was on this, it used to be called the product security team. So now we've just renamed it to the security response team. And they wanted to rotate out. And they'd heard that I'd had a fair amount of experience managing embargoed, responsible-disclosure-type vulnerability programs in open source projects. So they asked if I'd be interested. I said a certain word. And from there, I started to get involved. So I kind of came into Kubernetes security without a very in-depth knowledge of the code base.
Generally with security, it's the same things that you see happen everywhere. It's, you know, XSS attacks, SQL injections. The language is different. You need to learn the architecture a bit more. And so that really sort of got me more involved in really starting to ramp up on what I understood about the architecture of Kubernetes, what was a pod, what was the host, what levels of access did the container have to the host, and what was the scheduler. And I just, you know, because I was looking at these vulnerabilities that were coming in and would have to sort of replicate those to make sure that they were in fact real vulnerabilities. So, you know, I had to learn how to stand up a cluster quickly and, you know, pull in an old commit and, you know, just all this sort of stuff really to validate the security problems or whatever. Yeah, yeah, that makes sense. Yeah, I mean, to your point about them, you know, it's like there's nothing new under the sun, right? I mean, that's why I think it's OWASP, right, that has that nice set of, I think it's 10, you know, top vulnerabilities you should watch, where, you know, like, if you do a little bit of research as an average developer, you can have a really good idea of the traps you're likely to fall into, which is one of the nicer things about doing secure development, which can be tough. All right, so that brought you to the security response team. And then you were largely involved with that, I assume, primarily for a while. What kind of brought the Sigstore concepts forward for you, or, you know, like, what itch were you trying to scratch? Yeah, sure, so I started to pivot my focus to secure supply chain. That was about two years ago, okay. And I had this idea of, so I took an interest in these things called Merkle trees, okay. And it's nothing to do with the German Chancellor. It's this kind of cryptographic algorithm, okay.
That's not even where my mind went. It's bad joke number one, okay. Good thing you play along. So hopefully I won't make any jokes that I have to apologize about later on. Yes, yes, yeah, you'll be fine. So, yeah, I used to talk about these with, so Brandon Philips, he used to run CoreOS and was at Red Hat for a while, kind of turned me on to this principle of Merkle trees, okay. And at the same time, I'd been wanting to find some sort of system that can act as a source of truth around what's happened in a secure supply chain. Because a lot of the time you're relying on auditing systems that are susceptible to manipulation, effectively. You know, if it's like a logging system or, you know, syslog or some sort of data store, a hacker could breach a system and then they could cover up their tracks effectively. So that was one aspect, and there's this kind of, again, this sort of spaghetti spool of dependencies. And, you know, this is what they call an SBOM now. I don't think the term SBOM even existed then. I think BOM was a thing, bill of materials. BOM was a thing, yeah, yeah, yeah. And so I was just thinking it would be great if we had some sort of source of truth that has an immutable structure. So, Merkle trees are the perfect example there. So, I started to experiment and to prototype around that technology and came up with a project that I called Rekor, okay? And Rekor is Greek for record, okay? So, I use Greek words, a lot of projects do as well. Tekton is a Greek word. I can't remember what the translation is. So, I wrote this prototype called Rekor, okay? And it's essentially a Merkle tree. So, I should explain for folks what a Merkle tree is. So, to put it simply, Merkle trees are actually leveraged in a few technologies. You'll find them in blockchain, where the transactions are hashed into a Merkle tree. Git actually operates on a form of Merkle tree, okay?
And BitTorrent, it's quite an old technology, not an old technology, but it's a technology that's utilized a Merkle tree for quite some time. And a Merkle tree is essentially, you have these things called digests, okay? And they're fingerprints of any type of artifact, okay? So, it'll be a long string of numbers and letters, okay? And it kind of represents the integrity state of a particular object. And the idea is, if you change a single bit of that object, it would change the entire string. Right. So, what a Merkle tree does is it takes these digests and, let's say you have a layer of eight, it adds two together and hashes those, which then go up to four, and you go up the tree till eventually you have a root hash, which is a bit like a commit hash effectively. And that's kind of a very good representation of the integrity structure of that tree, okay? So, that's these things called Merkle trees. And so, I had this project, Rekor, which leveraged that and then built like a simple API over the top, a RESTful API, so that people could make inclusions into the tree and they could verify that something is in the tree and has not been tampered with, effectively. So, I came up with that idea and had a prototype, and this is where it's a kind of classic open source story really. I had this code, wasn't quite sure what to do next, so I thought I need to share this with some people, see what they think, see what I mean. And there's an engineer at Google, Dan Lorenc, who I'd been collaborating with around different projects, Tekton CD and the OpenSSF, the Open Source Security Foundation. So, I shared it with Dan and said, Dan, I've got this thing, I'm not quite sure what to do next. Are you interested? And he said, yeah, I'd like to contribute to this. So, then there were two of us, if you see what I mean.
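To make the "layer of eight" description concrete, here's a minimal Python sketch of that pairing-and-hashing process. The artifact names are made up for illustration; real transparency logs like Rekor use a more elaborate tree construction, but the core idea is the same:

```python
import hashlib

def sha256(data: bytes) -> str:
    """A digest: the fingerprint of an artifact, as a long hex string."""
    return hashlib.sha256(data).hexdigest()

def merkle_root(digests):
    """Pair up digests, hash each pair, and repeat until one root hash remains."""
    level = list(digests)
    while len(level) > 1:
        if len(level) % 2 == 1:        # odd count: carry the last digest up
            level.append(level[-1])
        level = [sha256((a + b).encode())
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Eight artifact digests: the "layer of eight" from the example.
leaves = [sha256(f"artifact-{i}".encode()) for i in range(8)]
root = merkle_root(leaves)            # 8 -> 4 -> 2 -> 1

# Changing a single bit of any artifact changes the root completely.
tampered = leaves[:]
tampered[3] = sha256(b"artifact-3 (tampered)")
assert merkle_root(tampered) != root
```

Because every level's hashes depend on the level below, the root acts like a Git commit hash for the whole set: any tampering anywhere in the tree shows up at the top.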
And then another guy at Red Hat, Bob Callaway, he got involved and started to refactor the code and improve things, and it just sort of expanded from there really, it just sort of blew up essentially. And the problem was, we originally called it Rekor, but then we found out there's a company operating under the name of Rekor. So, then we had to do a name change, and we had to do one quite quickly, because we were speaking to the Linux Foundation. A bit of a driver. So, then you've got kind of trademark lawyers and that sort of stuff involved. So, we came up with Sigstore, as in signature store, because we kind of took this concept of the transparency log and then it evolved to being like a signing, part of an overall signing system, which we have in Sigstore now. Because originally it was just the data store, okay. And we then realized that, great, we've got this way of recording events, okay. We want those events to be signed, cryptographically signed, so that we have non-repudiation around who made those signatures, who signed that artifact, okay. But we then realized that the signing tools that are around kind of suck a bit, really, people aren't using them. And then we realized, well, we've got a big problem there. Do you see what I mean? We've got this wonderful transparency log thing now, but how are we gonna get people to use it? Because they don't like signing things, so that was the next one, really. And that was where Dan had this really great idea about leveraging OpenID Connect and stuff like that, okay. So that we could then handle the key management challenge. And yeah, and that just sort of grew and grew and grew, until we started to look at signing more and more artifact types. So we started with containers, and then somebody would come along and say, yeah, I want to sign a JAR file, how can I do that? And I want to sign an SBOM. And yeah, it just sort of blew up from there, really.
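The "verify that something is in the tree" part is worth sketching too: given a leaf digest and the sibling hashes along its path, you can recompute the root without seeing the whole log. This is a generic Merkle inclusion-proof sketch, not Rekor's actual API; the `entry-N` leaves and the `(sibling, side)` proof format are invented for illustration:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_inclusion(leaf: str, proof, root: str) -> bool:
    """Recompute the path from a leaf up to the root.
    proof is a bottom-up list of (sibling_digest, side) pairs;
    side says which side the sibling sits on when the pair is hashed."""
    cur = leaf
    for sibling, side in proof:
        pair = sibling + cur if side == "left" else cur + sibling
        cur = h(pair.encode())
    return cur == root

# Tiny four-leaf tree built by hand: root = h( h(l0+l1) + h(l2+l3) )
l0, l1, l2, l3 = (h(f"entry-{i}".encode()) for i in range(4))
root = h((h((l0 + l1).encode()) + h((l2 + l3).encode())).encode())

# Proving l2 is in the tree takes only two hashes, not all four entries.
proof_for_l2 = [(l3, "right"), (h((l0 + l1).encode()), "left")]
assert verify_inclusion(l2, proof_for_l2, root)
```

The appeal for a transparency log is that the proof is tiny (logarithmic in the log size), so a client can check that an entry is really recorded, and untampered, without trusting or downloading the log's full contents.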
So in a lot of ways, it was that kind of, it was that magic of what everybody wants, really, which is the right idea at the right time. Right, right. And consumable, right? And yeah, and usable. And then there's a problem there. There's something that I learned from, who did I steal it from? I should give them credit. Brandon Philips, again. So I heard him once say, there's this analogy, and I've always stuck with it, which is in software, you can have painkillers or supplements, okay. So a supplement is something that people will say, yeah, that's pretty cool. Yeah, that's interesting. I'll follow what you're doing, okay. But what you have is a supplement. If people forget to take their vitamins in the morning, they don't start losing their mind and turning the car around and going back home. Do you see what I mean? Whereas a painkiller is an immediate need, there is a problem that's hurting and you need to solve it, do you see what I mean? There's a nice physical reminder to take your pill, in other words. Yeah, very much, yeah. And so it's a very good sort of spectrum on which to evaluate, is your project useful? Do you see what I mean? Are you solving an issue? Are you a painkiller or are you a supplement? Because I'm somebody that's, I've written thousands of supplements, you know. And occasionally you're lucky to get a painkiller. Do you see what I mean? Yeah. Yeah, yeah, totally. So, you know, I kind of am curious. So do you see kind of Sigstore expanding beyond the kind of Kubernetes realm? Like, is there, you know, is it a more generic solution? Because it kind of sounds like it is. But I don't know, you know, from not knowing the inner workings, I don't know how, like, how dependent it is on that tool chain, in a sense. Yeah, very much, yeah. So with Sigstore, we have the kind of the infrastructure services, which is two of our sort of core projects. So one is Rekor, that I described, which is the transparency log.
And the other one's called Fulcio, which is the PKI, the CA, the software signing solution. Okay, and those can sort of stand on their own without the other services and still have use, okay? So Rekor's been one that's attracted a lot of people that have these non-cloud-native, non-Kubernetes-type usage scenarios. So there's many of them around Rekor. One that's quite interesting recently is Arch Linux, looking at doing binary transparency with Rekor, and their security team is starting to implement that. And we've had people looking at sort of firmware blobs or sort of drivers and stuff like that, where those can be recorded in the transparency log. And other people interested in signing documents using this as well, you know. So it's luckily a very customizable system, Rekor. You get to choose what data sets you want to go in. You can design your own manifest essentially. So we call it manifest agility. So we always tried to make it so that we could support any type of schema that comes along that people want to use. So a lot of this was preemptive around the work that is happening around software bills of materials, SBOMs and so forth. So that we know we have the agility to be able to work with different data types and different manifest types and so forth. So yeah, there's a lot of interesting people that are coming forward that are finding they could utilize the technology to solve a particular problem that they have within their particular vertical. Right, right. What actually immediately comes to mind for me is that, you know, blockchain is such a buzzword, right? But it does have a few things where I think there's a kind of real strong legitimate usage. And it sounds like there might be a good overlap with this one, which is in particular, like, one of the problems people have is distribution of college transcripts. 
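The "manifest agility" idea — the log doesn't care whether the artifact is a container, a jar, firmware, or an SBOM; it just records a digest plus a kind-specific manifest — might be sketched like this. The field names here are illustrative, not Rekor's actual entry schema.

```python
import hashlib


def make_entry(kind: str, artifact: bytes, spec: dict) -> dict:
    """Build a schema-flexible log entry for an arbitrary artifact type.

    `kind` might be "container", "jar", "sbom", "firmware", etc.;
    `spec` carries whatever extra fields that kind's schema needs.
    """
    return {
        "kind": kind,
        # The digest is computed the same way regardless of artifact type
        "artifactDigest": "sha256:" + hashlib.sha256(artifact).hexdigest(),
        "spec": spec,
    }
```

The point of the sketch is that only `spec` varies per artifact type; the log's core operations (hashing, appending, proving inclusion) stay identical whatever schema a new community brings along.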
So like if you wanna go and apply to grad school you need to get your transcript from wherever you did your undergrad and prove what you did. And one of the challenges, if you wanna do it digitally, right? Is making that cryptographically secure. And it sounds like this might actually be a really good solution for that kind of problem as well. Yeah, very much. We had some folks from a project called SecureDrop get in touch. And SecureDrop is kind of like a drop box for journalists and whistleblowers. Got it. Oh yeah. For all that they need to share. And I know other people were looking at utilizing transparency logs as a way to combat fake news. Okay, so effectively what somebody does is they write something about, I don't know, XYZ politician had an affair with an alien, okay. And then that will kind of blow up on social media. And then somebody will call them out. The experts will round on them. And they'll just change the story. Oh yeah, okay, I've got that wrong. Whereas with something like a transparency log, you can't redact, effectively. Yeah, so yeah, there's a lot of really interesting use cases around this technology. Definitely, I expect to see a whole lot of innovation over the next few years. Yeah, yeah, no, it is definitely an interesting problem. I mean, I know from even like, there's even something in HTTP, right? That tells you when something was actually published. But everybody just manufactures that header based on whenever they generate the page. Sometimes it's auto-generated, yeah. All right, and what I wanna know a lot of the time is when did this actually come out, right? Or when was it updated or whatever? And not even from the kind of changing-the-past kind of model as much as just I'm interested for whatever reason in the history of this thing. And so that kind of data is really, really important. 
Yeah, I'd really like to see this leveraged for firmware transparency as well. Ooh, that would be great. Because it's an interesting area, because I obviously work in the software. Right. Well, and I do, I'm very heavily focused on what can we do to improve software security, okay? And then I look down and I go, oh God, you know, there's that whole base layer in the hardware, these firmware blobs, where what we could do to improve things would have quite a magnitude, definitely. Yeah, one of the projects that was kind of initiated by and then worked on by some people in the Boston Red Hat office, Enarx, was one that I found interesting. So that was my team, some of my team were working on it. Yeah, that's what I was kind of wondering. Yeah, like, so like Lily Sturmann and, I don't know, is Peter Jones on your team? But, no, no, but he was one of the, yeah, it was kind of a conversational start. Yeah, she's an engineer on my team. She's a great engineer. Yeah, yeah, yeah. Yeah, I'm trying to rope her into talking on something, but I can't remember what exactly now. Oh yeah, you should do. Yeah, Lily, right. Yeah, she's really good. So moving kind of a little bit to the side, we know one of the things that we wanted to talk to you about was the HackerOne bug bounty program. And so can you tell us a little bit about what that is and why it's interesting? Yeah, very much. So just to kind of do a one-on-one on what we do. So we're the security response team, and if a vulnerability is discovered by a security researcher or a programmer or anybody, okay? We like to run a responsible disclosure program. So that means the issue will be handled under embargo, okay? So that way we can make sure there's a fix in place and that vulnerability is not gonna be in the wild for people to exploit while we have a crazy headless rush to try and fix it, if you see what I mean. 
So it's safer for everybody, it's safer for the users. And traditionally people would report to an email address and send us an email, okay? And then we would have our embargo program, which we would kick off and handle things from there. And there was a good volume of issues coming in, but we realized that in a lot of ways people should really be rewarded for doing the right thing. So that was one of the main drivers, okay? And it's just more eyes on the code as well, okay? You know, there's this security idiom that we have of the more eyes on the code, the better, the more secure. So yeah, we launched a HackerOne bug bounty program, okay? And how this operates is that HackerOne have a portal where, when somebody discovers a security issue in Kubernetes, they can raise it with HackerOne, and HackerOne then have a team that will do an initial triage of the issue, try to establish if it's actually a vulnerability or if it's something that's previously already been reported, and so forth. And then if it is, it's then sort of escalated to us folks in the security response team. And I think there's about seven of us at the moment, and it's a mix of Red Hat, Google, Amazon, somebody from Datadog, and forgive me, I might have forgotten some of the others, but it's a kind of a team where we have like a rota, so we go on call. So I've just come off the rota from last week, okay? And yeah, it's proven to be very effective, the HackerOne program. It means that a lot of the noise around false positives is no longer taking up our time, because we have that sort of first-line support layer to triage and filter. And those that do report something, they get rewarded, which is always good to see. Yeah, totally. 
Yeah, I think bounty programs in general, they've been slow to start, right, but maybe they're starting to actually take. But it's a really great way to kind of support open source or support kind of change or whatever, because some people have more money than they have time. Especially if you have kids, for example, I definitely often have more money than I have time, and I'd like to see some changes, but can't do it myself. But kind of in the reverse, getting the credit for discovering these things, it's not just money, right? It's also the, you know, chutzpah, right? Or the publicity or whatever around having discovered some of these things. But you need some backing to kind of say, oh yes, you really did discover a real vulnerability. It's not just me claiming on the internet that something happened. And then as you say, right? Embargoes are really important. So one of the things that I thought was, I think a lot of people who don't work in software companies, especially even ones that are kind of infrastructure-layer software companies, like you don't realize how important those kind of security embargoes are. We've talked about it on the channel a few times, but basically it's like it was common for me to be in a meeting or something like that or whatever. And they'd be like, oh, you know, we need to do this other release. And somebody would say, obviously, why do we have to do this other release? And they'd be like, oh, I can't tell you. And it's just par for the course. Like it's just, you expect that to happen on occasion. You know, you don't really need to know, you don't really want to know, you know, and it's just kind of like part of your culture. 
And I think that's something that people from the outside don't necessarily recognize or realize how important, like not only it's kind of obvious, I think, how important it is, but I don't think it's as obvious how common and how acceptable it is. And so when you do a thing like a bug bounty program like this or whatever, that it's real. Like when you say it's embargoed or it's gonna be a secret until everyone's ready to release it, that's a very, very true thing in most organizations that I've worked with. And so that's a, you know, it's something I think that we need to reassure, you know, people who don't have experience with it, that this is true, because it's kind of uncommon, you know. It's the right thing to do. Yeah, yeah. All right, so moving on, one of my more favorite type questions. So from a security perspective, what keeps you up at night about Kubernetes? Yeah, that's a good question. So if I'm really honest, nothing keeps me up at night anymore. Because everyone's called lights out, aren't you? Yeah, I don't understand the question. If I do let it, I'm just gonna be a wreck, do you see what I mean? I guess really the scary stuff is container breakouts, always a concern. You know, I think the main ones is where anybody can sort of access the host or attack adjacent tenants. I think those are the big ones. They're the scary ones, essentially. Or any sort of very nasty privilege escalation, okay, which allows somebody to control a cluster. You know, that's scary stuff, where people can destroy pods or perhaps, you know, even more. And I would say those are the two ones really. And yeah, it's interesting with security vulnerabilities. You have the magnitude of the criticality of the vulnerability, that's one thing, okay? And then there's the marketing spin, the kind of the FUD, if you like, you get around that vulnerability, you know? And they don't always marry up. That's the thing, you see. 
So for me, I think it would be like both of those on rocket fuel, really. A particularly nasty vulnerability that gets a lot of coverage as well. Perhaps, you know, some high-profile people or users of Kubernetes undergo some sort of significant nasty data leakage as a result of that. You see what I mean? So yeah, I guess it would be an orchestration of things. A very nasty, highly critical vulnerability that's found in the wild and utilized in the wild. And then it creates a media frenzy, okay? And then there's a high-profile attack. Somebody actually undergoes significant harm because of that, because of that breach. That, for me, is the kind of the... Yeah, that would keep me up at night. So the perfect storm keeps you up at night? Yes. Okay. So yeah, I mean, I think I like that one. But I also think that what Sigstore is trying to solve is one that particularly concerns me, which is the kind of the SolarWinds problem a little bit, which is that an attack has been delivered and been successful and no one knows. And that, in some ways for me, I think it's the scariest thing, where it's like the quote-unquote sleeper agent, where something can happen eventually, but the infrastructure has already been suborned or whatever to take it over. And I think things, like I said, like Sigstore or whatever, are how you solve for that. I've known a few people who've done some of this kind of work in the past, and it is part of a successful attack to clean up after yourself, right? And if you can modify those logs or whatever to show that you weren't there, then, well, did anybody hear a tree falling in the woods, right? So I think that models where audit logging or logging or whatever, where you're trying to keep track of or have a way of knowing for sure whether anything has been modified, at least it limits the problem of an undiscovered attack or undiscovered control of the software system. So yeah, that's my big one. 
Yeah, very much, yeah. And that's where the transparency component is so appealing for Sigstore, because we will run a public transparency log, okay? So that means that anybody can audit and monitor that log. So they can check the integrity of that log to make sure we're doing the right thing, okay? And they can check the integrity of any particular entry in that log, okay? So then they can start to look for suspicious patterns, things that are happening out of the normal, if you see what I mean? And so one of our hopes, and one thing that we're really trying to encourage with Sigstore, is people to come along and innovate on top of our platform. So we have this transparency log, okay? So we're talking to people about running these things called monitors, which will monitor the log, okay? And we'll start to look for certain patterns and so forth. So that's where the Rekor, the transparency log part, is very appealing, because as you rightly say, when there is an attack, okay? We're talking a key compromise, which is a particularly very nasty attack. You wanna know the blast radius, okay? So you wanna know what else has been signed with this private key, okay? And with Sigstore's Rekor, you can answer that. You can perform an inclusion proof using the public key to find out exactly what artifacts have been signed, and when, with that particular signature set. So yeah, it's really sort of a nice application of that technology. But in a lot of ways, it's not new. This is something that we borrowed from certificate transparency, which does a similar thing. So traditionally, a CA would sign a certificate for somebody, okay? And you wouldn't really know what happened behind the closed doors of that CA, okay? You just have to trust they're doing the right thing, okay? And then what happened was a couple of very high-profile domains. 
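The inclusion proof mentioned here can be made concrete: given one entry, a handful of sibling hashes, and the tree's root hash, anyone can check that the entry really is in the log without downloading the whole log. Below is a minimal sketch over a power-of-two-sized Merkle tree, using the RFC 6962-style leaf/node prefixes from certificate transparency; it's illustrative, not Rekor's actual code.

```python
import hashlib


def leaf_hash(data: bytes) -> bytes:
    # RFC 6962-style domain separation: 0x00 prefix for leaves
    return hashlib.sha256(b"\x00" + data).digest()


def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix for interior nodes
    return hashlib.sha256(b"\x01" + left + right).digest()


def build_tree(leaves):
    """Return the tree as a list of levels; level 0 is the leaf hashes.

    For simplicity this sketch assumes the leaf count is a power of two.
    """
    levels = [[leaf_hash(d) for d in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([node_hash(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels


def inclusion_proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1  # sibling index at this level
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-left?)
        index //= 2
    return proof


def verify_inclusion(data, proof, root) -> bool:
    """Recompute the root from one leaf plus its sibling hashes."""
    h = leaf_hash(data)
    for sibling, is_left in proof:
        h = node_hash(sibling, h) if is_left else node_hash(h, sibling)
    return h == root
```

The proof is logarithmic in the log size — for a log of a million entries you only need about twenty hashes — which is what makes public auditing of a large log practical.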
I think Google and Facebook were, somebody went to a CA and said, I need a certificate for facebook.com, and they got given one, okay? And then you can imagine the amount of damage you can do if you are suddenly able to stand up a server and the browser says this is okay, this is facebook.com. If you see what I mean? So this kind of incident rightly scared a lot of people. So they came up with this idea of certificate transparency, which is a similar thing. The certificate chain from the root CA and the certificate that's signed for the particular domain is recorded into a transparency log. And then what that allows you to do is, for example, redhat.com can then monitor the log for certificates being signed for the domain redhat.com, okay? Now, if they've recently procured a new certificate, meh, yeah, okay, delete the email. Nothing to see there. Whereas if you hadn't, it's like, holy bleep. You know, somebody's managed to get a domain certificate for our property. Do you see what I mean? And so you get the same thing with Rekor around secure supply chain transparency as well. So one of the technologies that we're looking to utilize is OpenID Connect, okay? So what you'll be able to do is sign an artifact using an OpenID identity provider, okay? So for example, Google, Microsoft, GitHub, there's various people that provide these OpenID identity provider solutions, okay? And the great thing about that is you can then monitor the log for people signing things with your identity. A bit like, you know, Have I Been Pwned? Right, yeah, I love it. Yeah, yeah, yeah. And the great thing about an email, some people will say, yeah, but an email, it's not as secure as, you know, a 5,000-pound HSM. Well, yeah, if you want to use that, you can use that with Sigstore, but a majority of the open source projects, your mum-and-pop small projects, they can't afford specialist hardware and they don't like managing the keys. 
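The monitoring pattern just described — a domain owner watching the log for certificates on their own domain and alerting on ones they didn't request — is simple to sketch. The entry fields here are hypothetical; a real certificate transparency monitor parses actual log entries and precertificates.

```python
def monitor_log(entries, watched_domains, known_fingerprints):
    """Flag any certificate logged for a domain we own that we didn't issue.

    `entries` is an iterable of dicts with (hypothetical) "domain" and
    "fingerprint" fields; `known_fingerprints` are the certs we requested.
    """
    alerts = []
    for entry in entries:
        if (entry["domain"] in watched_domains
                and entry["fingerprint"] not in known_fingerprints):
            alerts.append(entry)  # the "holy bleep" case: an unexpected cert
    return alerts
```

The same shape works for the Rekor/OIDC case described next: swap "domain" for a signing identity (an email address), and the monitor flags artifacts signed with your identity that you don't recognize.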
So this gives them a system where they can sign something using their identity, and they have the protection of the crowd monitoring as well, okay? And then the nice thing as well is these systems, these providers rather, they give you extra little security trinkets on top, like two-factor authentication, you know, that you can use to protect your account. You get that thing where, if you log in on a new computer, it will tell one of your trusted systems, hey, did you just log in from this country with this IP address? So there's lots of nice existing security controls that we can leverage there. And so that's why we look to use this OpenID Connect and the email address. And then we've got that same thing as certificate transparency: we can monitor who is using our identity to sign, effectively. Yeah, it's interesting. I mean, I don't know if either of you would recall, there was a project from Microsoft many, many years ago called HailStorm. And then in response to it, there was an open source project called the Liberty Alliance or Open Liberty Alliance, something like that, which is I think still around. But the idea of it was kind of, it was like SSO for the internet, right? And without all the problems of using Facebook login, right? Or using Google login, or even these days, Microsoft login, because it was kind of an independent service, even if it was owned by Microsoft, there was no way to get to the rest of you, right? So I really wanna see, this is one of the things that I think has been missing from the internet for a long time, right? Is that I have an identity on the internet, right? That is protected by something that then I can use to manage my Twitter profile or my Facebook account or my work account, right? One of the things when I left Red Hat, right? One of the challenges I had was like, oh, wait, I need to rejigger my entire life, because there's a lot of things that I use both personally and professionally, you know. 
Identity, yeah, identity is fascinating, okay? I think about it a lot, and a lot of people a lot cleverer than me think about it a lot. And it's always very difficult because you effectively have two choices really, okay? One is TOFU, trust on first use. So I just assume this is Langdon, okay? Looks and smells like him, I accept that key. I bring it into my keyring, okay? Or like we do with an SSH server, okay? You click yes to the fingerprint and it's added to your known hosts, okay? But that kind of has its problems, evidently, okay? And then the only other system is like a web of trust. So us guys meet up, you know, we look at each other's passports, we sign each other's keys. That doesn't really scale. That has a whole host of problems, and COVID disrupted that even more. Yeah, yeah. So the third one is you need a kind of a trusted entity to attest that this is the thing, you know, okay? And we tried that with, sorry, our CAs, our certificate authorities. And it's really difficult to sort of get beyond these models. Right. There really isn't anything that's got traction that has solved this, really. I really hope there is, you see. Yep. You know, I'd like for one day for this to all be disrupted. But in a lot of ways it's like the humble password. You know, people are coming up with biometrics and all sorts of systems to try and disrupt the password. But nothing can quite cut it. Do you see what I mean? Right. So identity is a really tricky one. You know, I would love to see some sort of open decentralized identity system. But identity and decentralized are strange bedfellows. You can't get them to connect. Do you see what I mean? Right. And that's the problem that the blockchain folks have a lot, really, is when they try to establish an identity, they typically have to leverage something off chain. I'm probably going to upset some blockchain folks. But it is, it's just the problem that we all have. 
You know, yeah, I really look forward to the disruption happening there. Well, and to your earlier point of, you know, kind of hitting the right place at the right time and being consumable. It's like we've had a number of attempts, right? Where, you know, like we had some technology or we had some, you know, thing or whatever. And it almost made it or whatever. But it didn't quite, it wasn't quite consumable enough. It wasn't quite at the right time or whatever. So I do, I do also, like, hope, and I think it will be solved. But you know, it's like even, you know, PGP signing emails, right? It's just, you know, it just never took, you know. Exactly, yeah. I mean, that's a really good one, that was one of the areas that we've managed to, I think, improve a lot really. Yeah, yeah. There's nothing wrong, I'm not talking about the algorithms, their strengths or design or any of that, but it's the adoption at the end of the day. It's just incredibly poor, you know. Very few people sign stuff, you know. Like I said, for email, PGP is wonderful. I can sign something. You know it's coming from me. You have non-repudiation. I need to send you something sensitive. I can encrypt it. But if you grab a load of technical people at a conference and say, right, who's using PGP? Very few people. Yeah, very few people actually. Very few people put their hands up. Right. And it's the same with software signing as well, projects signing software, you know. So I think with Sigstore, we've come up with this good balance between usability, accessibility and good levels of security protections as well, where we have that transparency. And that's the sort of, I think that's the thing that is really going to help us get traction around adoption, really. It's not to say we're better than PGP or we're better than X or Y, it's just, this might be more easily usable by... 
Well, and, you know, we're far enough along, I think, into a lot of encryption kind of solutions, you know, of like signatures or, you know, actually making things secure or whatever, that at this point, it's really about adoption. And the "better" is really about who's using the most of one thing, right? And to some extent, it's like, we can go fix any security flaws in a sense if we can drive adoption first. You know, it's like, if we discover that, you know, we're using, you know, whatever, like SSH a few years ago, right? Switched from like 1024 to 2048. 2048 or 4096. Yeah. And, you know, it's like, that's fixable, you know, because SSH is completely prevalent, you know. And so we need the same kind of idea, and then we can fix the actual bugs, you know, once we have that happen. Because right now, nothing is happening in the right places. No. I'll use just one example. I really like Rust. Okay, the Rust programming language. So Rust, it's really interesting as a security person, because you get this memory safety. The compiler is very strict around ownership and so forth. And that fixes a lot of the issues that we have in C and C++ effectively, okay? But if you look at their packaging system, everything is pulled in untrusted. So you have this kind of wonderful performance language with all these extra security guarantees, okay? Around memory safety. But then there's again this spaghetti monster of dependencies coming in. None of it is verified or trusted. It's just pulled in, you know. And so they're one of the communities that we're hoping to really sort of help come up with a sort of a Sigstore implementation that could help them, really. Right, right. Because this is better for all of us. So, I think with that, we are already over time, but we should probably kind of wrap up there. Is there any kind of closing, anything you wanted to kind of add on? 
I mean, I think I will definitely say, go check out Sigstore, you know. And if you want to help contribute, that'd be awesome, but at least start using it, right? We need to drive the adoption for things to get better. Yeah, very much, yeah. So, just to tack on to the end of that, do come along, okay? It's security, but we're a very friendly community. We really welcome all new people that are interested. We support people at all sorts of levels of confidence in their coding or documentation. So do come along and get involved. Awesome. Thanks so much for coming. Thank you both. Yeah. And last but not least, we will be at KubeCon. We're gonna have a booth. Oh, cool, okay. Awesome, yeah. So you'll be able to come up and we'll show you how to sign your things. Awesome. Yeah, awesome. All right, thanks, Luke. Thanks, Langdon. Oh, great. And thanks for having me. Thank you, everyone out there. Please check out sigstore.dev. Figure out where you can implement it in your environment, and let Luke know if there's things you feel like you can improve upon. Right. Thanks, everybody. Have a good one. Thank you. Stay safe out there. Take care. Bye-bye.