Hi Matthew. Hi Cameron. Can you hear me? Yeah, how are you doing? How are you going? Great, great, really well. Been a while? Yeah, it has, mate. Well over the top. Yeah, absolutely. Did you get down to BSides? I did not, unfortunately. I had to stay in Sydney, but how about yourself? No, mate. No, same here. It's a bit crazy with me now with two children. Oh, well, great. So when was the second one born? Nine weeks ago. Oh, wow, fantastic. That's great. So what's their name? Yeah, River. River. Oh, sweet. Yeah, something different. Yeah, absolutely. Yeah, that's good. So, and there was Max, right? Yeah, that's right. Yeah, that's right. Yeah, I remember that. Oh, cool. So they'd only be like under two. Max is 18 months. Two under two, mate. Two under two, keep them together. I had three under four at one stage, so it's a way to do it. Knock them all out, keep them all together. They become great mates. Yeah, that's what I'm thinking. That's what I'm thinking. Yeah. How are things going with Sonatype? Oh, good, good. Yeah, we've been smashing it, actually. So a bunch of really good deals, and we've actually had a whole bunch of new products recently. So we bought a company called MuseDev, which is basically a static analysis product, a bit similar to SonarQube, I guess. Yeah. And just to expand our coverage of that space, because a lot of people want that as well. Yeah, sort of a single throat to choke. Yeah, no, for sure. Yeah. And then we also have a strategic partnership with NeuVector, sort of adding the container piece as well. Okay. What are you guys doing with containers with NeuVector? Yeah, so basically just working with them, because they've got this really clever behavioural analysis stuff to do dynamic port blocking and process blocking, and you sort of teach it programmatically what processes are good and bad. And that's pretty innovative. No one else has got that.
So yeah, that's sort of the differentiator. So you don't have to sit there and whitelist all the processes. It'll actually do it automatically based on, you know, putting it into a learning mode. Yeah. No, for sure. All right. That makes sense. Hey, everybody else. Hey, Matt. Hey, boys. Hey, Brad. Hey, how are you? Hey, Adam. Oh, I'm good, mate. I'm good. Are you catching up with me this afternoon, mate? Me? Brad, yeah? Yeah, we can do, yeah. Yeah, I'd forgotten about it if we were, but yeah, no, definitely. Six o'clock. Oh, yeah, yeah. I don't know. Yeah, that's fine. All right. Nice. Might add in my marketing person as well, to be a fly on the wall. Yeah. Oh, cool, man. So how's everyone doing? Terrific. Just nervous, as before every presentation, Matt, you know. All right. Well look, it's 12 o'clock, we might as well get the ball rolling. So thanks, everybody who's here. Hey, Justin. I'll hand over to our fearless leader, Justin. He can handle it. All right. Well, he's muted, but I'll just finish what I should have said. Thanks, everyone who's here so far. Nice to see you all again, for those who I already know. I've organized with Andreas here from Red Hat to give a bit of a presentation on software supply chain security through the use of software factories. If you're familiar with the DoD DevSecOps framework, as part of Platform One they introduced this concept of software factories, and it's pretty cool. So he will handle that. I thought I'd copy a note from the previous agenda on April 21 just to remind everybody that, well, KubeCon is coming up, and as part of that there's Cloud Native Security Day on May the 4th. So if you haven't already checked that out... I think you're going to go to it, aren't you, Brad? What's that? Sorry. You're going to go to KubeCon? Yeah, definitely. Yeah, I normally go each year. It's not good timing, but I just forced myself to stay out. No, it's not. Is anybody else going to that?
I'm going to be there in spirit, digitally. We should do a little Zoom meeting to hang out in between, or just have a discussion. Yeah. Yeah, we could have coffee for the first couple of hours and then beer. I normally have an Irish coffee, so I put whiskey in my coffee. Yeah, very cool. Well, look, that's kind of all I wanted to say. Justin, I noticed you're here. Do you want to take over? I think it might be practical... My system is saying it's going to do a software update in a moment, and I have a feeling that that means I'm going to be disconnected. So you've been doing a great job. Let's have you just continue. Cool, man. All right. Well, on that note, I hope your system goes okay. Do you want to just get started, Andreas, with this presentation? I think that's why everyone's here. We're all pretty excited to see this. Sure. Mark, do you want to share your screen? And I'll also say hi to... We've got other Red Hatters on the call today as well. Andy Block from the US is one of our principal architects there, or distinguished architects. And we've got Adam Goossens, Shane Boulden on there, and Mark Hildenbrand. And we all work for Red Hat, but we all sort of also are interested in this topic. And I'm probably the one who's done the least amount of work in there. So kudos doesn't belong to me. It's more the people I just mentioned, right? Especially Mark, who was building this demo in the last few weeks. And so, yeah, let's kick off. So where did the idea come from? So, a few months ago now, right, I basically watched a recording from one of the CNCF SIG Security sessions in the US. John Meadows was heading this up, and Andy Martin was sort of presenting the concept of a software factory. And I posted some screenshots in there, right? So multiple CI/CD pipelines can be composed into a complex build system, and this is called a software factory. And it's basically, you know, used to securely build and deploy all components of a system.
And at the same time, we were writing that white paper about supply chain security, and we'd learned about the SolarWinds attack, right? And then we looked at this architecture diagram on the right-hand side, and Mark and I basically looked at that and went, hmm, what are the components, and how could we make this better? Then we reached out in good old open source fashion to the wider Red Hat community across the globe. And we actually found that our teams that work with federal governments across the globe specifically had already started an open source project called Ploigos. And we've got the links in here, but that is actually the, you know, open source way of, or approach to, basically building such a software factory. The only thing we didn't quite, or I didn't quite, like about the software factory secure bootstrap is that the idea was to have a laptop locked away in a vault where you basically would start off from. And I thought in a modern enterprise that probably doesn't work that well, so we need to come up with other things. And even though boot attestation is not part of this demo today, you know, I think it has a lot of vital building blocks to actually, you know, bring those secure software supply chains into enterprises. And then, you know, as you can see, "what's the problem", that's not from us, that's from the previous session: a large problem space requiring an end-to-end solution. And that's really what this Ploigos is all trying to do. And on the left-hand side, we've got the sort of famous one in the SIG Security space, right? The DoD Enterprise DevSecOps Reference Design. And so I think for all the teams, that was sort of a common denominator. We all knew about it, and we all sort of, you know, agreed with that approach. And that's also why what you see today is aligned with that reference design.
So, Mark, do you want to get to the next page? So in good, again, in good open source fashion, the UI is probably the last thing that gets updated. So what you see is a flurry of text messages. I'm just going to run you through a couple of screens so that you get used to that when you see it in the demo, what it is. This is really just the text output. And there's usually pass messages in there, but the orange highlighted text is basically failed. So that's how you know something is wrong. It was interesting. Mark told me the story, because Mark was rebuilding that many times over, right? And it fails often because, you know, some parts of the open source project have been updated and then you need to sort of re-sync all the parts. And then when it went this time, you know, Mark was thinking, ah no, what changed now? And then he realized actually nothing changed, but the software supply chain did actually do its job, because, as you can see, the title for this step is "ensure software patches installed", and there was a new vulnerability. And by checking the content, it actually checked and realized that the patches had not all been updated, and that's why it failed. And that was a great case to show us that this is really a good approach and it's working. All right. So next slide then. This is an OpenShift screen. This is the deployment topology. All those components are being installed by an operator, and that operator basically takes care of all the components you need to run and set up your software supply chain, or your software factory, or such, right? And we'll go through the components in more detail during the demo, but as you can see, there are plenty of components in there that actually make this a comprehensive solution. Right. And if you think of an enterprise context, that's actually what you want.
You want the single, certified operator installed. And then you know that everything is taken care of, and you can trust that and, you know, build your enterprise software based on that. So, next screen: pipeline view. This is just our Jenkins pipeline in the Blue Ocean view. And what it shows you is exactly the error message I mentioned earlier: in the middle, you see the CI static image scan is failing. And that's just to, you know, show you how it looks later when you encounter that screen. So, the next screen. And then context. So the Ploigos project is not a product, right? What we do, we want to invite everyone who hears about this to contribute and make it a successful open source project. It's a Red Hat-led project at the moment, because our consultants sort of across the globe wanted to collaborate, and, you know, that's how it got created. So it's the perfect storm, basically, for an open source project: driving faster results for our customers without reinventing the wheel. And yeah, Red Hat Consulting, I mentioned that. The DoD website, to align with the white paper and reference architecture, I mentioned that. And then, yeah, the security meeting that I mentioned earlier was basically the starting point for me to think about it and gather people around. And at the moment it's teams really in the US that talk and communicate and collaborate across that. And I mentioned the laptop in the vault that we didn't like. And also, everything you see today is scripted, and that's on Mark's sort of GitHub account. And the main components, I mean, there are more components on there as you'll see: the operator, the, you know, Ploigos project, and there is the source code management system in there as well. But just on a high level, right, the OpenShift platform is the user interface that you see on the right-hand side.
So it shows you all the operators that we have installed here. And then, yeah, the Ploigos software factory operator is obviously the main one in this case. And then we're using, Mark has used, Rekor for artifact attestation. And I think that's it. Next slide. Is that where we move over into the demo, Mark? Yeah. So move over to me. Yes. Thanks, Andreas. So yes, I'm Mark, standing on the shoulders of giants. So I didn't create most of Ploigos, but I have used it. You know, I'm a long-time listener; not often that I call in. But today I'm going to show you a demo of how Ploigos works. I do have a live cluster with all this stuff going on, but just because builds take a long time and demo gods are vengeful, I do have it prerecorded, so we can get through all this in a reasonable amount of time and people can still have lunch. So the demo, there's a lot. There are a couple of different demos, seven different little snippets I'll show you, but broadly categorized into three chapters, if you will. So the first one is sort of riffing on what Andreas was just talking about: okay, so we have the Department of Defense, they're talking about things like software factories, and Ploigos. How does it make a software factory? It's sort of hinted at again with this, the developer perspective of OpenShift, showing you a myriad of boxes, which is probably slightly confusing. We'll try and make more sense of that in a second. But what you'll see in this demo that I'm about to show you is that you start with the Ploigos operator. And we'll talk about how that even gets into your system. It can also be made available for air-gapped systems, for which Adam Goossens, who's on this call, has done some contributions to Ploigos, to even be able to support government agencies that can't be connected to the internet. So a little bit closer to a laptop in a vault, which Andreas and I like to joke about.
But anyway, so the Ploigos operator is in some ways a root of trust, if you will. The idea is that, as you'll see me do in the demo, you create a Platform custom resource in true Kubernetes fashion, and the operator will read that. And from that, it figures out what you're going to create with this factory, if you will. So we're not building anything yet. We're just building the things that are going to be building things, or verifying the things we're building. That's where the operator will busily work. And you'll see, in time-lapse fashion, it kind of builds all this stuff out. Just because I created one little custom resource, this is all the magic of Kubernetes: it builds these all out in the OpenShift developer perspective. You'll see that in a minute; I think it will make more sense once you see it. So I'll show you what that looks like, and again, if there are questions, I have a live cluster and all that, so we can get into that. But this is a little helpful, so we'll start with the OperatorHub. The important thing here is you see this notion of the provider type, available in a marketplace. And some of them are community operators, some of them are certified by Red Hat, because this is Red Hat OpenShift you're looking at right now. In our case, the Ploigos operator came from a separate kind of provider, which I've installed on the cluster previously. So as an admin, I've decided, yes, I want what they're selling. I want those operators, right? This is what our community, as Andreas said, our consulting group, put together. And this is the Ploigos software factory operator, which I had pre-installed, as I promised you, on this cluster.
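To give a rough idea of the Platform custom resource being described, here is a hedged sketch. The API group, version, and field names are illustrative guesses, not the actual Ploigos operator schema; only the tool choices (Gitea, Jenkins or Tekton, SonarQube, Nexus, CodeReady Workspaces, Selenium) come from the discussion.

```yaml
# Hypothetical sketch of a Ploigos Platform custom resource.
# Field names and apiVersion are illustrative; consult the Ploigos
# operator documentation for the real schema.
apiVersion: redhatgov.io/v1alpha1   # assumed group/version
kind: Platform
metadata:
  name: demo-software-factory
spec:
  git: gitea                    # in-cluster Git service
  ci: jenkins                   # or: tekton
  staticAnalysis: sonarqube
  artifactRepository: nexus
  ide: codeready-workspaces     # based on Eclipse Che
  uiTesting: selenium
```

Creating one resource like this is what kicks the operator into building out the whole factory described in the demo.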
If we look at the operator in a little more depth, it looks like any operator. You see the install of the operator succeeded, and it looks for two custom resources: a pipeline, which we'll get to, and a platform, which we're gonna look at now. There's a form view for the platform, sort of; I'm gonna use the YAML view. We'll come back to this at the end. So this is basically prescribing to Ploigos what we want it to build, right? That's the operator; that's in the developer perspective. That operator then just starts building things through its blueprint. It knows what it needs to pull in. Now it's pulling in things that go a little bit beyond just the pipeline. Gitea pops in there. It's a little bit chaotic. Selenium's coming in there. It's like a thousand little cobbler's elves running off and creating a software factory for me. So this takes, I think it took about seven-ish minutes when all was said and done. If we take a look at it, this is a platform that revolves around Jenkins. There is also a Tekton version, because who wouldn't want a Tekton version? I'm showing the Jenkins version; that's what I started on. And you can see, like I said, there are elements in here that aren't related just to building. There's also an IDE called CodeReady Workspaces, based on Eclipse Che. If we look at the platform in a little more detail... So now I'll just compare and contrast the custom resource with what was created. So Gitea, you can see Gitea is over there. So basically the things that I called out that I wanted for my continuous integration, the things that I wanted for UAT: Jenkins is there, that'll be important. Static analysis: SonarQube is in there. Good on us for including SonarQube. Nexus is in there. So there are a number of things. It's basically, if you will, an implementation. The Department of Defense white paper is prescriptive in terms of what you should do, but not what tools you should use.
And this is sort of our consulting team saying, hey, within Red Hat, these are the tools that we find are the best practice for implementing some of those different stages that every software supply chain should have, per the white paper and per our collective wisdom as a community. So that's the platform. Any questions about that so far? I realize Andreas and I are kind of dominating. Anything anyone wants to add, or any questions, anything that totally doesn't make sense? Can you just go back to the YAML file? Just the previous slide. So we're saying here that you can choose these options, and if you want, say, Gitea... does that essentially run a Helm chart in the background or something, and do all your persistent volumes and everything? That's right, though there is some proviso in there, as you'll see when we get to the end of the demo, as long as you've implemented certain things ahead of time. But yes, it's been set up so that you can plug and play what you see as best. So it stands to reason that instead of Argo you could put in something different; you could put in something that allows you to use Flux or something like this if you wanted. In our case, our platform, again, just to show you in the live cluster what this looks like, maybe a little bit easier to see. These are the different options for this platform. This is what we tend to use in Red Hat land: Gitea for an internal Git repo, Jenkins, Selenium. Yeah. Does that make sense? Yeah, yeah, definitely. Yeah, it looks awesome. Cool. Cool. Yeah, just, is this specific to OpenShift? It's not, well, this is the thing. So it's not necessarily specific to OpenShift, but our Red Hat consultants tend to use OpenShift because they're Red Hat consultants; most of the primitives we're using are generic Kubernetes. If somebody else on the call wants to jump in on that... Yeah, I can add something, Mark.
So yeah, a lot of the primitives, as you mentioned, are out of the box with Kubernetes. The operator pattern is one that isn't specific to OpenShift. It's one that can be deployed in any Kubernetes environment, because it's just running a control loop. It's just some resource definitions that are applicable to any environment. It just comes down to what you want to enable and how. And does it account for, let's say, you don't have enough CPU in the cluster to install them? Does it just blow up, or does it sort of nicely tell you you've exceeded your limits? So right now it is kind of prescriptive. So we have a bit of a prescriptive stack, but down the road we're going to be looking at different ways to provide more capabilities. As we mentioned on the CI side, we have Jenkins or Tekton. We'll be exposing more options down the road, especially because I'm field-facing, so I see a lot of customers, and some want to use one product, some want to use a different product. So we're going to provide better options down the road, and provide ways to enable and disable certain features as necessary. And as someone who's experienced a fair amount of failure at the hands of this platform, one way you find out is in typical Kubernetes ways, as you see here: the operator has installed successfully. You could imagine a world where, if I didn't have enough compute to create everything that the Ploigos operator needed, like say CodeReady Workspaces or Eclipse Che, it would be in a reconcile loop and I'd be able to look in the logs. What's that? Sorry, I didn't hear that. Okay, cool. But it's not just what Red Hat is providing, right? This is an invitation. If you guys see something that you want to have as part of the trusted software supply chain or the software factory, then the invitation is that you implement that. I was making a joke. I could say CrashLoopBackOff. Go ahead. Yes.
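The "control loop" behaviour mentioned above can be sketched as a toy. This is not Ploigos or Kubernetes code, just a minimal illustration of the reconcile pattern: compare the desired state declared in a custom resource with the actual state of the cluster, and act until they converge.

```python
# Toy illustration of the operator/controller reconcile pattern.
# Components are modelled as plain strings; a real operator would
# create Deployments, Services, etc., and retry on failure.

def reconcile(desired: set[str], actual: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_create, to_delete) needed to converge actual onto desired."""
    return desired - actual, actual - desired

def run_control_loop(desired: set[str], actual: set[str]) -> set[str]:
    """Drive actual state toward desired, like an operator's reconcile loop."""
    while True:
        to_create, to_delete = reconcile(desired, actual)
        if not to_create and not to_delete:
            return actual          # converged: nothing left to do
        actual |= to_create        # "install" missing components
        actual -= to_delete        # remove components no longer requested

# Desired platform components, as a Platform CR might declare them:
state = run_control_loop({"gitea", "jenkins", "sonarqube", "nexus"}, {"jenkins"})
```

If a component cannot be created (say, not enough compute), a real operator simply stays in this loop and keeps retrying, which is the reconcile behaviour Mark describes seeing in the logs.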
Yes, like not being able to pull certain images. Yes, I've seen it all in trying to make this demo. But yeah, as you will see, if we look at the topology view, it's a number of open source projects. It's not meant to be just Red Hat. Some of them will be things that Red Hat supports, again, because it's the outworking of our consulting arm at the moment. But as Andreas says, we're presenting here because we want it to be more community. I mean, this is based on the US DoD DevSecOps framework, which is obviously open source, and they have all of these design paradigms in there anyway. So really anybody could take this and apply it to their own system with enough work. Yes, 100%. Excellent. Anything else before we move on to talk a little bit about the pipeline? Oh, thanks, man. Not for me. I'm curious from a security standpoint. So I think I get what you're doing, but is there security in this? Is there cryptographic signing of things? Are you using something like in-toto underneath? Or what happens if an attacker breaks into this framework here? Can they just go and produce whatever they want as part of it? So I can take a whack at that, though I know we have some people closer to Ploigos. Just first, does anyone closer to Ploigos want to take a whack at it? Otherwise I'm happy to answer that. I'll just leave some space. So I'll throw in my two cents on that one. So what I'd say is that Ploigos on its own is not a substitute for any other security controls that you need at each step. And the work that Mark is going to show in terms of the integration with Rekor is, I think, good stuff in regards to verifying and attesting the output of each stage of the process and the process overall, but you still want to have suitable controls wrapped around it, which is why it also deploys things like single sign-on for two-factor authentication and things like that as well. Okay, yeah, that helps.
And I know in in-toto, they're adding Rekor into the latest ITEs (in-toto Enhancements), things like this. So I was just kind of curious, because it seems like it would be almost no work for you all to integrate, and it would give you a huge security differentiator. I mean, this wouldn't protect against SolarWinds, because SolarWinds was bad guys getting into the infrastructure and doing things, but with in-toto plus this, then you have at least some hope of catching that. And for not a lot of development effort at all; in fact, it should be nearly trivial. You could basically already be there. So I was just curious about that. I think Keylime would also play a role in this, where you basically start off with boot attestation and make sure that the right operating system libraries boot from trusted sources as well. And that's, I think, where you would start off a 100% secure supply chain. That's right. And yeah, I know in-toto; it's not the first time I've heard about it, and being close to this project, there's certainly a lot of talk about integrating with it with the North American team. This is just where it stands right now, and Keylime and boot attestation and all that, yeah, can be built on top of this. Right. Yeah, that makes sense. And for those who don't know, I'll just say: in-toto is basically designed to take cryptographic information about different steps and then let you apply a policy that gets checked over this. And so I think if you try to distill what's done here into buzzwords, and in-toto into buzzwords, there's a lot of overlap. But if you actually look at what's happening, there's a lot of difference. in-toto is completely agnostic to everything happening in the system. It doesn't care; it has nothing like the functionality here.
And so I think there's tremendous potential, because this is a really slick, really well done, really usable, high-level system that integrates everything together in a good way. And I think you can just sort of get those security properties from in-toto with almost no work and really have the best of all worlds for people using this. So yeah, but this is really cool. Sorry, please go ahead and continue. No problem. You'll see more overlap as we go, right? There are other places where in-toto might overlap. But yeah, so that's the platform. As Adam said, too, it's not a substitute for typical security controls: principle of least privilege, and this is all set up to use SAML and all that other good stuff. But yeah, so it's, as you say, an implementation of something that anyone could implement, which is sort of the Department of Defense's kind of best practices. But there are still things that are left out, right? We can still implement, as you say, things like in-toto. And that's just the platform, right? So that's just creating our factory. We haven't built anything yet when we talk about things like SolarWinds. Yes, maybe we could have had a compromised factory that's pumping out something that is itself compromised. But what we're gonna talk about next is sort of how you might have other controls where you could start to see if something's been tampered with. So, the pipeline and the platform. Just as we saw with the operator, right, when we were looking at the operator over here, I looked at the installed operators, I go down to my Ploigos operator: it has pipelines and platforms. You want to think about the platform, if you will, as the factory, and then a pipeline as sort of one of the assembly lines in that factory. So what does that look like for us? So we already have the platform custom resource; the operator is still running in the background.
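The in-toto idea discussed above, recording hashed evidence of each pipeline step and checking a policy over the chain, can be sketched in miniature. This is not the in-toto library; real in-toto also signs each piece of link metadata and verifies it against a signed layout, which this toy omits.

```python
# Minimal sketch of in-toto-style "link" metadata: each step records
# what it consumed (materials) and produced (products), and a verifier
# checks that each step's inputs match the previous step's outputs.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_link(step: str, materials: dict[str, bytes], products: dict[str, bytes]) -> dict:
    """Record hashed evidence of a single pipeline step."""
    return {
        "step": step,
        "materials": {name: digest(d) for name, d in materials.items()},
        "products": {name: digest(d) for name, d in products.items()},
    }

def verify_chain(links: list[dict]) -> bool:
    """Check that each step's materials match the prior step's products."""
    for prev, curr in zip(links, links[1:]):
        for name, d in curr["materials"].items():
            if prev["products"].get(name) != d:
                return False   # artifact changed between steps: tampering
    return True

src = b"public class App {}"
jar = b"fake-jar-bytes"
links = [
    make_link("build", {"App.java": src}, {"app.jar": jar}),
    make_link("scan",  {"app.jar": jar}, {"app.jar": jar}),
]
```

This is roughly the "cryptographic information about different steps plus a policy checked over it" described in the discussion; a SolarWinds-style swap of the artifact between build and deploy would make the chain fail to verify.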
I'm not showing a ton of this, but there are other demos out there that talk about, well, how do I make a project that is compatible with Ploigos? The barrier to entry is really low nowadays, though it still takes some work when it comes to just any random project out there. There are kind of two things that you need. One is your project should be GitOps-ready. So this pipeline kind of assumes that you're gonna have a code repo and some sort of GitOps repo, or Helm repo in this case, right? So that's one thing. You'll see that in the custom resource when we build it. The second thing is that it assumes that you know enough about your use of Ploigos; you can kind of see it here, and you'll see it a little closer in the demo, that you have a Jenkinsfile that tells Ploigos what version of the overall Groovy script, you'll see that in a second, you wanna use for Ploigos, sort of the thing that binds Jenkins to what's called the step runner. So I'll show you through this diagram, and you'll see in the demo: I make a Pipeline custom resource. That's basically a way to say, dear Ploigos, set me up, if you will, an assembly line in your software factory for this project, and you'll see what's in the custom resource. The main thing in the custom resource is what the project is. So what the Git repos are, and how I want them to be manifest inside of my cluster. For us, in what we do with our teams, we wanna deploy it locally to an in-cluster Gitea, sort of a little GitHub that runs inside of our mighty fortress, which is, you know, Kubernetes, OpenShift. Then, armed with that, with the Jenkinsfile in the project, a build is kicked off just like any other build. It's just that Ploigos, as part of the platform, installed the Jenkins main server for us. That main server is told, and it also gets set up on that main server: hey, I have a new assembly line for you, a new conveyor belt, whatever you wanna call it, for this reference app code project.
And that looks at the Jenkinsfile, like any other Jenkinsfile in any project, which points to a Ploigos library, which finds this thing called the Ploigos step runner, which is basically a way to decouple the toolchain, if you will, from what happens in every given step. So what happens in the steps is in this Python library, and each step is a conglomeration of these different kinds of plugins. So one is for signing, one is for running SonarQube, one is for Maven, right? If that makes any sense. Again, all this is not necessarily telling everyone in the world this is how you should do it. This is just how you could make sense of the demo that I'm about to show you: as the different steps of Jenkins are done, and these are steps that you would recognize from best practices and white papers around the world, it uses differently configured steps in the step runner library. And with this config.yaml that I call out inside my project, I can further refine, on top of what the platform has in terms of configuration, how I want my asset to be built. So some of that information will come from the platform, like where the heck is SonarQube in this factory, but what tests I wanna run, that might be something that I can configure in the local project, right? So the factory plus the project creates, if you will, an assembly line, which in this case is implemented with Jenkins. We also have a Tekton flavor for those of you who are Tekton-minded. So I'll just show you that, and then questions. So, to make a pipeline, to make a pipeline... it sounds like a novel, "To Make a Pipeline". So I'm gonna move some things around a little bit here. There's our factory. I wanna create myself a new assembly line, so I do that with the Pipeline custom resource for our operator, in our Ploigos world, creating a pipeline.
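The decoupling that the step runner provides, abstract steps satisfied by swappable tool-specific plugins, can be sketched like this. The class and registry names are illustrative, not the actual Ploigos Step Runner API; the point is only that the pipeline names a step, and configuration decides which tool implements it.

```python
# Sketch of the step-runner idea: each abstract step (static analysis,
# packaging, signing, ...) is backed by a tool-specific "implementer"
# selected by configuration, not hard-coded into the pipeline.

class StepImplementer:
    def run(self, params: dict) -> str:
        raise NotImplementedError

class SonarQubeScan(StepImplementer):
    def run(self, params: dict) -> str:
        # A real implementer would invoke sonar-scanner here.
        return f"sonar-scanner against {params['project']}"

class MavenPackage(StepImplementer):
    def run(self, params: dict) -> str:
        # A real implementer would invoke `mvn package` here.
        return f"mvn package for {params['project']}"

# Which implementer backs which step comes from config (platform-level
# plus the project's config.yaml overrides), not from the Jenkinsfile:
REGISTRY: dict[str, type[StepImplementer]] = {
    "static-analysis": SonarQubeScan,
    "package": MavenPackage,
}

def run_step(step: str, params: dict) -> str:
    """Dispatch an abstract pipeline step to its configured implementer."""
    return REGISTRY[step]().run(params)
```

Swapping SonarQube for another analyzer would then be a configuration change (a different entry in the registry), with the pipeline's step names untouched, which is the plug-and-play property described above.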
I'm just gonna do a YAML view of this, and I'll paste in my stuff. And what I wanna point out is that, again, I say my overall app is called the reference app, and I have a reference app code repo which is at this Git location. Again, who knows where this Git location is; for the demo, it's in GitHub. And then I have a Helm, a config repo, GitOpsy thing, in reference-helm. And I want my service name to be called reference-app-fruit. It's a highly trusted app that spits out information about fruit, for some reason; it's a demo. So once I create this custom resource, so code, Helm, as we talked about, when I go and click create, that's gonna create that assembly line. So it's gonna migrate the projects into Gitea, it's gonna tell Jenkins about those projects, it's gonna set up a Jenkins job. If I look at the pipeline, I can see that, hey, it's already done everything that it needed to do to get to the desired state for that custom resource. I can search for Jenkins in the developer topology view, log in using, I'm gonna use the OpenShift single sign-on; there is a single sign-on that Ploigos uses inside the platform, as Andreas was talking about. I'm gonna look at the Blue Ocean view of Jenkins, and you can see it kicked off a build for me. That was a configuration option the astute of you may have noticed in the custom resource. And it's pulling in a Groovy script that builds out the Ploigos library. So the Jenkinsfile has configuration in it that tells how to build out all these stages, which is why you kind of see it built out. Just looking inside Gitea: here's the code, here's my project, my reference project with the Jenkins information, right? So like I said, it points to something that Ploigos provides, that the team manages, which is sort of the Groovy script that gets driven by these parameters, and that builds out a Jenkins pipeline for me based on this information. So that's one of the things: a Jenkinsfile, for my project to qualify to be built in the software factory.
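A hedged sketch of the Pipeline custom resource being pasted in here. Only the application name, the two repos, and the service name come from the demo; the field names, API group, and repo URLs are illustrative placeholders, not the actual Ploigos schema.

```yaml
# Hypothetical sketch of the Pipeline custom resource from the demo.
# Field names, apiVersion, and URLs are illustrative placeholders.
apiVersion: redhatgov.io/v1alpha1   # assumed group/version
kind: Pipeline
metadata:
  name: reference-app
spec:
  appName: reference-app
  codeRepo: https://github.com/example/reference-app-code   # placeholder URL
  helmRepo: https://github.com/example/reference-app-helm   # placeholder URL
  serviceName: reference-app-fruit
  startBuild: true   # the option that kicks off the first Jenkins build
```

Creating this resource is the "dear Ploigos, set me up an assembly line" request: the operator mirrors both repos into the in-cluster Gitea and sets up the corresponding Jenkins job.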
The second thing is this config.yaml, which is project-specific options that I wanna be able to override in building my asset, and there is one thing I'm gonna override for this demo: I've added my own step implementer, which we'll get to at the end, this notion of the Rekor log, but we'll come to that in a second, some foreshadowing for you. Stick around if you wanna figure out what that Rekor thing is. This is also more foreshadowing; this is the developer perspective on Rekor. I've run it locally within my cluster, but it doesn't have to be run here; that's just for the sake of the demo. We'll come back to that plenty before we're done here. Just proving that it's up and running, I can run it locally. We'll talk about what Rekor is and why it matters. Now, while it's busy, it's gonna be running those unit tests for what feels like forever and ever, so here are the different typical Blue Ocean views. I can look at all the logs; that'll come into play in a little bit. So there are outputs: every step has different artifacts that it produces, which we'll get to at the end. Static analysis, I have output that it produces from that. And you kind of get the idea: it's gonna go through all these different stages, it's gonna push artifacts to Nexus, these are typical things to do for Java projects. Now, if I just kind of skip through some of this stuff: I create an image, I scan an image. This is the point that Andreas made before, where the scan actually broke. And another thing I'll say about having an operator: I had to adjust what tests I wanted Ploigos to run, and I wound up changing a custom resource that the operator managed, and the operator wound up blowing away those changes. So if the operator is running, and running properly, there are many different controls to kind of keep things from being tampered with.
Which you saw just go by there, we'll come back to it; that was sort of it writing out what happened in the build using Rekor, or just a demonstration of how one might use Rekor, similar to Grafeas, again, as we start to get closer to that. And you can see I'm skipping dev; I'm moving to test and prod. So it's gonna deploy my application to test and prod; it runs some Selenium tests behind the scenes. There's a fair amount of complexity that our consulting team has already put into this, because these are things that they do all the time. This proves that it actually deployed something out to our production namespace; that's what we're calling production in this demo. And then that's the end of the pipeline; all the way at the end, you can see it's meant to deploy this fruit service. Again, thank God we have a fruit service now deployed and safe. And if I just refresh this, just to prove the pipeline has completely finished. And this time it was not red, not because I was missing some patch, but because as far as this pipeline is concerned, this was a successful assembly line run within my software factory. So I'll pause there. Any comments, questions, things people wanna see in the live cluster? It's fine if not, I won't feel bad. I guess I would like to understand a little bit more about overrides. So in that file there, it looks like these software factories come with stock images. Can you override the arguments? Let's say I have, I don't know, for example, a certain Java version that I wanna use for the Maven build, say 1.8. Can I override that in any way? Or is it pretty much what you see is what you get? Yes, yes it does. I'll talk about my experience and then somebody closer to the project may wanna say more. I will say it is 100% possible, because I had to do it to make this demo. So if you see that line there, I inserted my own container, right?
And this is my project; with my crappy project, I'm gonna put in my own container there, and that was one way I could do it. A better way to do it, right, one that's less intrusive, is changing configuration values, as you kind of see here. So I ultimately wanted to adapt what tests I ran with OpenSCAP, and I used sort of their workbench, with help from Shane, who I think is on this call. He provided me this kind of customized version of the workbench to be able to skip a couple of tests, because I didn't want to test those things, because they could fail at any time. And that gives an example of sort of light configuration. What I'm doing here is probably ill-advised and requires deeper knowledge of sort of how Ploigos is running. I think most of the time you'd wanna only have to change things like this, but that's how the project will evolve, to make stuff like that easier. Anyone who's on the project wanna say more about the thoughts behind that? If not, that's fine, but just giving space if anybody on the project wants to... Ploigos, how do you pronounce it? I think it's Ploigos; that's how I've heard it said. Okay. It's Ploigos, but it's not as bad as, in Red Hat you have Quay. So obviously Red Hat has a fair bit of experience with supply chain security. I mean, Red Hat got hacked in 2008, the whole Fedora thing. What was the motivation for this project? I'm genuinely interested to know, because to Justin's point, there's some similar tooling out here that kind of fits into the puzzle. So I'm pretty keen to understand what was the core motivation to build something like this. Again, I could take a swing at that, but I'd rather someone on the project, maybe, since we're here. So I'll give it a go. I wasn't there at the start of the project, but as I understand it, the primary driver was essentially an identified gap: here is the U.S. DOD's DevSecOps reference design, here's the what but not the how.
And I guess a desire to have essentially something that comes in and fills in that how, so that folks who go, hey, I need to be aligned with the DevSecOps reference design, can get off the ground very quickly with a tool chain that aligns directly back to it, meets all of the requirements, et cetera. That's my understanding of essentially why it came into existence: because someone looked and went, there's nothing that really exists. That makes sense. Have you read through the DevSecOps framework? The DevSecOps reference design, I have. Yeah, yeah, cool, cool. I appreciate that response. I've read through it myself quite a bit, and done a little bit with it myself to try and implement some of the stuff they have in Platform One, for example. And it's very much this massive resource, the Wild West, like where do you even start? So I can fully appreciate that. Just for the dummies, I guess, and this might be, feel free to chime in here, Justin, for in-toto, if it's relevant, but let's just take a scenario where you're a security tool like Nmap and you're hosted on, I don't know, whatever, like Linode or something, and some big, bad cyber gang roots one of your boxes, and they've got root, and all of a sudden they've got access to your software, and they re-upload a version that has some malware hidden in it. Like, how does this protect users from that particular problem? Because, I mean, that's ultimately the goal of software supply chain security. Like, Matt, when you say they upload your software, what are they uploading? Like, they're breaking into your... In the past, you download, you know, something like Nmap, right? And you can verify the checksums, you can make sure that the SHA is valid against what you're actually downloading. You know, it's a particular piece of software that was compromised, and, you know, once someone's uploaded a malicious version of that software to the internet, how does this protect users from that?
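The manual check being described — comparing a download against a published digest — is just a SHA-256 comparison, sketched below. Worth noting for the question being asked: this only helps if the published digest itself wasn't also replaced by the attacker, which is exactly the gap transparency logs try to close. The values here are illustrative stand-ins, not real Nmap release data.

```python
# Verify a downloaded file against its published SHA-256 digest.
# The payload and digest are illustrative stand-ins for a real release.
import hashlib

def sha256_matches(data: bytes, published_hex: str) -> bool:
    """True if the file's SHA-256 matches the digest the site publishes."""
    return hashlib.sha256(data).hexdigest() == published_hex

payload = b"nmap-release-tarball-bytes"          # stand-in for the download
published = hashlib.sha256(payload).hexdigest()  # what the site would list

print(sha256_matches(payload, published))            # → True
print(sha256_matches(b"tampered-bytes", published))  # → False
```

If the attacker controls the server, they can republish both the tarball and the digest, so the check passes; that is why the later discussion moves to signed, append-only logs rather than self-hosted checksums.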
So, kind of as I was saying before, there is always the need to have additional controls over and on top. But where this can potentially come in, and I've not kind of touched on it, is because there is this decoupling between the pipeline and the tools, it becomes possible to write and inject new, I guess, implementations of steps in there. So let's say, for example, that we've got a Git step implementer, which is basically, hey, check out the thing. There's no reason that couldn't be extended or enhanced to also include verification of the committer and things like that. So it doesn't have the tools out of the box today to do it, but it could be built and added in as part of the pipeline. So it's really just a framework to stitch other stuff together, to do something in an opinionated way. Yes, because, yeah, if you start from the reference design, the what, not the how, as Adam said, we're starting to fill in the how, but then come questions like, well, now that we've got a bit of a framework, how do we further harden that framework, and how do we make it customizable? Because we're trying to balance the interests of security: well, we want everything to be repeatable, but tool chains tend to be very snowflakey in the real world. So how can we kind of balance those two interests? That's what you're kind of seeing emerge here. Yeah, that's what I wanted to share with you. That's a very fair call. Okay, well, look, it's pretty polished so far. So, well done. Right, polished, yes, I'm glad it's coming across that way, put together in a polished way. I got it, I got it, but no, it's got pretty pictures and... Subtle? Yeah, well, some of that is just OpenShift, OpenShift, I know, it's very... We promise it's not a glitch, we promise. And just a quick question, in terms of the open source part of this: so, high level, what I've seen today is there's a few layers.
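The extension idea just described — take the existing Git checkout step and layer committer verification on top, without touching the pipeline — can be sketched as a toy plugin model. The class and method names here are illustrative, not the actual Ploigos step-runner API.

```python
# Toy sketch of the step-implementer decoupling discussed above: the
# pipeline knows only abstract steps; each tool is a pluggable class,
# and a step can be subclassed to add checks (e.g. committer identity).
from abc import ABC, abstractmethod

class StepImplementer(ABC):
    @abstractmethod
    def run(self, context: dict) -> dict:
        """Run one step; return artifacts to merge into the context."""

class GitCheckout(StepImplementer):
    def run(self, context):
        # Pretend we checked out the repo and captured metadata.
        return {"commit": "abc123",
                "committer": context.get("committer", "unknown")}

class VerifiedGitCheckout(GitCheckout):
    """The enhancement from the discussion: same step, plus committer checks."""
    def __init__(self, trusted_committers):
        self.trusted = set(trusted_committers)

    def run(self, context):
        artifacts = super().run(context)
        if artifacts["committer"] not in self.trusted:
            raise ValueError(f"untrusted committer: {artifacts['committer']}")
        return artifacts

def run_pipeline(steps, context):
    # The assembly line: run each configured implementer in order.
    for step in steps:
        context.update(step.run(context))
    return context

result = run_pipeline(
    [VerifiedGitCheckout(trusted_committers={"alice"})],
    {"committer": "alice"},
)
print(result["commit"])  # → abc123
```

Swapping `GitCheckout` for `VerifiedGitCheckout` is a configuration decision; the pipeline driver is unchanged, which is the decoupling being claimed.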
So we have the container, we possibly have the Kubernetes deployment, and then we have the framework itself. Which ones could I go in today and maybe make a PR against? Can I do it at the container level, the framework, or all three? All three. Yeah, go ahead, Adam. Yeah, so everything for this is up, I was going to say upstairs, upstream in GitHub, under github.com/ploigos: all the container definitions are there, the workflow definitions are there, the operator definitions are there, all written in Ansible. So, yeah, it's all there and ready to accept any and all PRs. I've actually had a look at that repo, and it's all very biased towards Red Hat OpenShift. Are there plans to make a more generic vanilla Kubernetes flavor? We let the community decide. Yeah. You know, it's up to our customers. So obviously, we're building on technology we know; we know the source code, often, and we know it works. And, being conscious of time, I really would like us to demo the Rekor part of it. That's where the signing and the attestation basically come in, if that's okay, Matt. Yeah, I'm okay with that. But yeah, let me get to that before we run out of time, and then questions at the end. So, Rekor. So, Rekor is a part of Sigstore. So there's a separate group of projects, Cosign, Rekor, a bunch of things, and there's a lot going on in the space. They're in-toto kind of friends. In this case, what we're looking to demonstrate is, just as you guys were pointing out, hey, this framework is great, but there are all these things you could add to it. We decided, like, hey, let's try and add something to the framework, and Rekor seemed like a thing to add, for multiple reasons, which hopefully I'll get to before we're done.
So again, what we're doing is trying to write our build activities to a tamper-evident store, in a way that might have helped with software supply chain attacks like SolarWinds. I understand it's not a remedy for it, but being able to see, here were all the steps in the chain that went from this Git commit to this container build, let's say. What if we had something that could immutably attest everything that happened, now that we have our framework? So the reference design is the what; the framework is the how. Now that I've made some opinionated calls about the how, it lends itself to saying, well, what if each of these steps someday had to write its output into an immutable database, so that auditors or other things could automatically check that? So what I'm demonstrating here, and this is just a demonstration, this isn't exactly how it would be implemented, is I'm saying, when I go to sign the image, I'm also going to do two things. I'm going to store the signed image in Rekor, to say, I, the build chain, am going to use my keys to sign it: I, the build chain, made this image. And then I'm also going to add a node, we'll talk about that in a second, I'm going to record in Rekor the artifacts that went into building this image. And again, stored in an immutable database, in this case Trillian. So transparency.dev, sort of Google's Trillian project, an open source project. We'll get to build nodes in a second. So let me just introduce Rekor really quickly, if Zoom... thank you, Zoom. All right, here we go. So Zoom was just preventing me from showing you this video. So, where we last left our heroes: this is the thing I wanted to show about signing the container image. So what you see here is I've got private keys that represent the tool chain, and it's using those keys to sign images. So I sign a container image, which is kind of what you see here, using a Ploigos key for the factory. It signed an image; that's cool.
That'll be important in a second. It also stores that image in Nexus; so it's internal, it's using Nexus as an internal container image registry. And then finally, at the very end, this is what I was showing before, where I put two things in. When I go to sign the container image, I podman sign, I do a curl push to Nexus, and I use my Rekor log. I log to my local, Kubernetes-local kind of Rekor instance. I log two entries. One is sort of the last build node; we'll talk about what that means in a second, and there'll be some command line here. So you see the Rekor URL has a service-local address. I need to turn that into a public address just to prove that this is my Rekor server; I have it exposed publicly. Again, think of Rekor, the way we're using it right now, as a little bit like Grafeas. I don't know tons about Grafeas, but it's meant to kind of do that: Grafeas meets Trillian, let's say. And it'll become more apparent as we go on. If I actually look at the entry that the build chain wrote to Rekor, what I see is, oh my God, JSON, so let me just make it a little prettier. There's a body of base64-encoded data, but there's an inclusion proof, as you'd expect from immutable databases, that says, okay, so I've got a database, there's a Merkle tree behind the scenes. The body is interesting; we'll get to that. The thing at the top is the UUID of the leaf node, which Rekor uses to find entries. If I look at the Rekor command line, there are things I can do with entries in Rekor. I can verify those entries: Merkle root, inclusion proof, all that. I can get those entries, I can search through those entries, and I can upload. Uploading has been done by the build chain. Now, if I go to the next, I think I'll just jump to the next one. This is gonna talk about build nodes. So, and again, questions right at the end, just for the sake of time. So Adam and I got to chat about this; this was kind of fun.
So here's how we imagine something like this could work, both for attestation and for auditors and all that. And also, eventually, you'll see at the very end a way we can hook into maybe things like what in-toto is doing, or Kritis does, or all this kind of stuff. I built an image with a tag. You could imagine I could build an image and make an alias of that tag as the UUID that represents the last thing that happened in the build chain. So that last thing is sort of my first build node. That first build node could say, here's the step. So that first build node is sort of saying: this is the culmination of the whole build chain, I built artifact X, and if you wanna see what happened before, here's the previous entry in Rekor, which is itself a build node. So, a linked list inside an immutable database. And this thing would all be hashed, in the same way you'd expect from an immutable database, with its step name, with whatever output is relevant to it, maybe it's the unit tests, or pick a thing, or maybe it's static analysis, with an entry pointing to the previous step. And so on, all the way down, until eventually the previous UUID would be zero. And thus you'd have a way of creating the whole provenance, everything that happened from checkout all the way to the creation of an image, in an immutable database that you could verify for yourself. So anyone anywhere could look this up, as well as for the sake of auditing and all this kind of stuff, right? So again, let me run through this and then we'll get to questions at the end. Just to give you more of a sense of what Rekor can do, I can look up the last entry in Rekor by saying, get me that UUID, get it in JSON, and use jq just to kind of format it so we can make sense of it. This is how Rekor stores things in Trillian, right?
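The build-node linked list just described — each entry carrying a step name, its output, and the UUID of the previous entry, with zero terminating the chain — can be sketched in a few lines. This is a toy model: the dict `store` stands in for the immutable log, and in a real deployment the tamper evidence would come from Rekor/Trillian, not from a local hash check.

```python
# Sketch of the "build node" linked list: a hash-chained provenance
# record walked newest-to-oldest, terminating at previous UUID "0".
import hashlib
import json

def make_node(step, output, previous_uuid):
    body = {"step": step, "output": output, "previous": previous_uuid}
    uuid = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return uuid, body

store = {}  # stand-in for the immutable log, keyed by UUID

# Build the chain oldest-to-newest, as the pipeline would.
prev = "0"
for step, output in [("checkout", "commit abc123"),
                     ("unit-test", "42 tests passed"),
                     ("sign-container-image", "detached signature (placeholder)")]:
    prev, body = make_node(step, output, prev)
    store[prev] = body
last_uuid = prev  # this is what the image tag would alias

def provenance(store, uuid):
    """Walk back from the newest node to genesis, re-verifying each hash."""
    steps = []
    while uuid != "0":
        body = store[uuid]
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        assert recomputed == uuid, "tampered node"
        steps.append(body["step"])
        uuid = body["previous"]
    return steps

print(provenance(store, last_uuid))
# → ['sign-container-image', 'unit-test', 'checkout']
```

Because each UUID is a hash over the node's contents, including the previous UUID, changing any earlier step changes every later UUID — which is the property that lets an auditor start from the image tag and verify the whole chain.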
So it's got a body. You can change how Rekor stores things, that's outside the scope of this, but the things we care about in its data are this extra data, which we'll get to; the signature is stored in Rekor, the public key is stored in Rekor, as well as the UUID. Rekor ultimately wants to also be a PKI solution, because with the immutable database, I don't need to keep those keys forever if I can prove I had access to those keys at the time I sunk it into the immutable database. But see Sigstore, the Rekor project, to learn more about that. I'm gonna look at that extra data to pull out my build node. When I pull out my build node, I see that the step in question, again, for the demo there's only one step that's signing anything, it's that sign-container-image step that you see there, so in this step, and that matches that string exactly, that's what I wrote out. If I said my step was sign-container-image, my step output was base64-encoded, but we will see in a second that that is exactly the signature that was stored, and then finally the Rekor ID. So what I've done is put into Rekor proof that the image was signed; now this whole build chain, this chain of what happened, the little goods on the assembly line, is all signed in Rekor. So I have another way of verifying that the build chain did what I thought it would do. In terms of verifying, there are a number of different ways I can verify, and I may skip through some of this: you can verify by artifact, you can verify by public key, right?
So if you only have the artifact, or you only have the public key: you need the artifact, the public key, and the signature if you didn't have the original entry. That's kind of what I'm gonna show here. For the sake of time, I'll let it go for a second, just to give you a sense: the artifact in our case is the last build node, the last signature entry. I'm gonna pull that out of Rekor, I'm gonna look to Rekor to get that, I could get that somewhere else if I wanted to, but I'll pull that out of Rekor, so that's the detached signature. And the last bit is a public key, and I'm just showing that I could get that from anywhere. I had the public key in a secret, and again, I'm just using the OpenShift UI to quickly navigate to that secret, right? And again, this is a demonstration; I probably wouldn't keep the private key there. But if the public key is right there, I can say, hey, Ploigos public key, I'm gonna paste that in. And now what Rekor can do for me is not only verify the inclusion proof, but verify that this artifact was definitely signed with this public key, and that this is the signature that goes with that artifact, and it will do all of that. And it says here, yes, it was; otherwise it wouldn't have returned at all. So it says what the hash is, it gives me the tree root, in case I wanna do my own kind of inclusion proof. And in fact, the CLI does its own inclusion proof locally, based on the SHAs from the tree. The current tree size is only two, because I've only put two things in, because this is a demo and this is the first time I ever ran this. You can see these are the calls that I was making to my local Rekor. Again, not so interesting, just proving that I was calling Rekor inside of here. There's also a public Rekor instance, which I chose not to sully with my demo. You can also search through Rekor; again, just to give you a sense of what Rekor does, I can search by public key or by SHA.
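The local inclusion-proof check the CLI is described as doing can be sketched as follows: given a leaf, an audit path of sibling hashes, and the tree root, recompute the root from the bottom up. This uses the RFC 6962-style hashing (0x00/0x01 domain-separation prefixes) that Trillian-backed logs use, simplified here to perfectly balanced trees — a real verifier also handles uneven tree sizes.

```python
# Minimal Merkle inclusion-proof check (RFC 6962-style leaf/node
# hashing), simplified to power-of-two trees for illustration.
import hashlib

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry, index, audit_path, expected_root):
    """Fold the audit path from the leaf up; index's bits pick the side."""
    h = leaf_hash(entry)
    for sibling in audit_path:
        h = node_hash(sibling, h) if index % 2 else node_hash(h, sibling)
        index //= 2
    return h == expected_root

# A two-leaf tree, like the demo's tree of size two.
leaves = [b"build-output-node", b"signature-node"]
root = node_hash(leaf_hash(leaves[0]), leaf_hash(leaves[1]))
print(verify_inclusion(leaves[1], 1, [leaf_hash(leaves[0])], root))  # → True
```

The point of doing this client-side is the one made in the demo: you don't have to trust the server's "yes" — with the entry, the audit path, and the published root, anyone can recompute the proof locally.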
So if I wanna take my artifact, which is the build node, or in this case that build node that wraps the signature, I could search for it that way, search by SHA, which is what you kinda see here. And I get back the UUID; I expect that UUID, again, for the sake of time, let's get through that, the UUID does match what I had before. I can also search based on the public key, so I could take the Ploigos public key and say, hey, let me see all the entries that Ploigos has made, and there are two. Our build, our linked list, is only two: there's the one at the end with the container signature, and then there's another one which has the output of the build, right? And that's sort of what you see here. Just proving that there is a public Rekor server out there; there's no reason it has to be local to my cluster. In fact, we want it to be transparent. But just proving I didn't write to that Rekor server: if the tree root had changed, anyone verifying this could know, oh, this isn't the tree root I expected. And then finally, this is the last bit I wanted to show you guys today. Let's play with these kinds of build nodes. I told you the last build node is what we said was the signature; in our case, it'll be the signature of the output, the container output, right? So let's just see if that matches. Can I take a public key, which is the Ploigos public key? I have it locally here. Can I basically get that signature file, right? Just proving that the signature file, that this thing, is signed by Ploigos. I then go to verify it with GPG, right? I download it, and then I jq it. What I should see is, sure enough, it was signed by the service account; again, this is a demo, we don't have the verification of this key, but we know it was this key, regardless of who owns this key. I can also do the same thing with the signature content that was in the last build node.
And using that, I should be able to use the signature content; that signature content should represent the signature that was at Nexus, blah, blah, blah. If I output that, what we'll see should match this up here, which it does. You can see it's the same thing; it's the same build. So these are ways that you can start to do some verification. The other thing I wanna show is, if I go down through my linked list in the tree, I can get to that previous build node just by looking at the last build node and pulling out the previous ID. And again, you can imagine wrapping this in some sort of CLI. I can get that previous build node and then look at its extra data, which will show me the step name, some content, and the previous Rekor ID. And that's what you see here, although the step output is much bigger. Again, it's the same step, because it's the same step that I ran this from, sign-container-image, but that represents this first thing that I put... sorry, yeah, yeah, this first one that I put right over here, I highlighted the wrong one. This first one, which is the build output, right? And so if I decode that, which I'm gonna quickly do right now, following the same steps: previous Rekor ID zero, that means there are no further entries in this chain. If I take the step output, you see the step results from this build. So this is just an XML file that gets produced by Ploigos. Again, this is just a demonstration, but it kind of gives you an idea of the kind of things you could have per step and be able to unpack. And that's where I wanna stop. The last thing I'd say before we end our time is a future demo. And I think there are other people trying to do this, but you can imagine, I didn't get time to do this,
integrating with things like OPA, or Gatekeeper, which brings OPA to Kubernetes. You can imagine a world where even a simple admission hook that does a Rekor verify on the UUID of an image, in certain participating namespaces, would allow you to follow this pattern that you see on the transparency.dev website: assume this is Kubernetes, this is the deployment, you could have the webhook controller kind of checking, oh, hey, is this an image that I know has verifiably been built by my tool chain? So, possibilities for the future. That's the Open Policy Agent, just in case you... That's correct. At the end, we kind of threw a lot in, all at the end. Sorry, we probably left too little time for questions in the middle, but yes. So, questions now, for the last three minutes that we have, or anything that you guys want to see. Sorry, Matt, for cutting you off before, but I thought it's important to... No, no, no, thanks so much, guys. That was amazing. That's a cool project. Yeah, I like it, because I'm sort of building common patterns like this myself, and it's like, why am I doing it by myself if I can use this or something like this, right? So is there a Slack group or something where we can be more involved, or how do we... What's the next step? How do we engage? That's a great point. There is one for Rekor; there's a Slack group. I'm not sure there's a Slack group for Ploigos yet. Adam, do you have more on that? There's no Slack group for Ploigos yet, but to be fair, we probably do need to set one up. At the moment, the best place to engage is on the GitHub repos. Yeah, yeah, okay, cool. Awesome. It's a shame that we're out of time, but I did want to get Justin's view as well, in terms of SolarWinds and supply chain attacks and how this would help. So ideally you build your Ploigos operator as well through a software factory. But the point is, where do you start, right?
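The admission-hook idea floated above — reject any pod whose images the tool chain can't vouch for — reduces to a small decision function over an AdmissionReview-shaped request. In this sketch the `rekor_verified` set stands in for a real `rekor-cli` lookup, and all names and image references are illustrative.

```python
# Sketch of a verify-before-admit policy: check each container image in
# a pod request against a set of log-verified images. The set stands in
# for a real Rekor verification call; everything here is illustrative.
rekor_verified = {"registry.example/reference-app-fruit@sha256:abc123"}

def review(admission_request: dict) -> dict:
    """Return an AdmissionReview-style response for a pod request."""
    images = [c["image"]
              for c in admission_request["object"]["spec"]["containers"]]
    unverified = [i for i in images if i not in rekor_verified]
    response = {"allowed": not unverified}
    if unverified:
        response["status"] = {"message": f"unverified images: {unverified}"}
    return response

good = {"object": {"spec": {"containers": [
    {"image": "registry.example/reference-app-fruit@sha256:abc123"}]}}}
bad = {"object": {"spec": {"containers": [
    {"image": "registry.example/sketchy@sha256:def456"}]}}}

print(review(good)["allowed"], review(bad)["allowed"])  # → True False
```

In a real cluster this logic would sit behind a validating admission webhook (or be expressed as Rego policy under Gatekeeper), scoped to participating namespaces as described.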
And would that ever be possible without actually having full control and understanding of the source code that is used to build whatever it is, whether it's the operator or the software, right? So Matt was saying, what if someone uploads dodgy components? And I think the Rekor sort of Merkle tree addresses this a little bit, but where do you realistically start? Right, so yeah, in-toto has different ways of managing and handling this. We actually worked with Git, and they've redone parts of their signing scheme, because we found design flaws in the way that Git signing worked for Git tags and other aspects like that. They actually used Santiago's code, who's the lead of the in-toto project, and a design that he and I and some of our collaborators came up with. So yeah, there's a bunch of stuff that exists that already does this in the in-toto scope, because I think that project's been working at this problem from a different angle, where we've been very security-focused from day one, and that's been really the sort of primary thing. And also, we've been, I think, very vendor- and technology-agnostic. And so, like, I'm really impressed by what you have, and I'm really looking forward to digging deeper into some of the different pieces, but it's also, in some ways, a little hard for me to understand, because it feels like there's a lot that's very vendor-specific here, and it's hard for me to disentangle some of the security properties from some of the other things going on. But yeah, it was really enlightening, and I really enjoyed learning about it. Thank you, Justin. Matt, do you wanna close the call, or are there any further questions? Yeah, sure, sure. Did anybody else have any other questions? No, thank you very much. That was awesome. I really enjoyed it. Great. Well, look, if you guys wanna get in touch with Andreas, he's on Slack, so feel free to reach out to him.
You can check out the GitHub repo, and thanks so much, everyone from Red Hat. That was really enlightening; I really appreciated that. And yeah, thanks so much. That was really cool. Awesome, thanks for having us, and see you at the project. And yes, if you have anything in terms of extensions that makes it just as applicable for any other platform, absolutely, right? That's exactly the point behind an open source project. So thanks a lot for having us. Yeah, I think, look, as I mentioned, I took a quick look at the repo on the call. Obviously, it's just available for OpenShift right now, but from what I read, it doesn't seem too hard to get this running on vanilla Kubernetes. So I'm gonna have a little bit more of a play. And yeah, thanks for sharing. Hey, JJ, are you still here, buddy? Is there anything important to touch on while you're here? Yeah, thanks for putting this together. This is an awesome project. One thing I would suggest is, try and see if we can pull together a demo that doesn't have anything that's OpenShift- or Red Hat-specific. It'll serve a few purposes in terms of trying to get wider feedback and adoption. And for a project that's as useful as this, I think it'll also be useful to see how this plays well with other open projects that we have, like what Justin was saying about in-toto and stuff. So I would, I mean, if you're interested and if you're curious about getting more community involvement, I think it'll be a useful thing to do: a demo that's not too vendor-specific. But otherwise, it's awesome. It's a good learning for a lot of folks, and I'm pretty sure it's going to be useful for the community overall. Thank you so much for putting this together. No problem. Thanks for the feedback. We're definitely going to take that into account, and we hope to come back in the future, obviously, to show how the project has evolved. Awesome. Thank you. Great. Does anybody else on the call want to say anything before we close?
I think we're pretty much running out of time now. Yeah, I guess just what we briefly touched on at the start. I'm going to keep going. Yeah, like, if anyone wants to maybe hang out for 10 minutes or something sometime, just let me know. Yeah, for sure. That is a cool one. Yeah, have a discussion on one of the key talks, or maybe at the end of it we can also do key takeaways and have a talk about it, or something like that. It just makes it nicer, not being in that region, because it's quite an exciting event for me, so it'd be nice to share that with people and talk about it. Yeah, I'm down. Right. Cool. Cool. All right. Well, cool. Well, hey, look, just for those of you in Australia, or even APAC, I guess it's pretty relevant: just to let you know, Brad and I have been in touch with Bill Mulligan from the Linux Foundation, and we're going to be spinning up the KCDs, the Kubernetes Community Days, here in Sydney, where we're reaching out to different vendors for sponsorship, et cetera, at the moment, to try and get things organized. There's a very open invitation to anyone who wants to get involved in something like this; it's a massive undertaking. For full transparency, the Kubernetes Forum 2019, to date the largest conference hosted in Australia in the cloud native space, has been indefinitely cancelled, and it's the view of the CNCF and the Linux Foundation that this will replace it now. So yeah, look, if you want any more information, or if you want to get involved, please just ping me, because Brad and I would really appreciate any help or support you could offer. And yeah, other than that, we'll keep you guys posted. And yeah, all the best. We'll chat to you guys soon, I guess. Have a great day. Cheers, Tim. Cheers. See you, mate. See ya. Thanks, everyone.