much. There we go. Recording in progress. Don't say anything stupid, Steve. All right, let's get started. Supply chain security. I have reduced that rather long title, AppSec code to cloud, to just "cakewalk." Make it sound easier, make it sound more friendly, put some numbers in there to make it sound a little more like security, because everything is supply chain. Like, everything, seriously. But I'm going to try to go through ways in which you can get started with making sure you're doing at least the bare minimum with your supply chain, and I will go code to cloud, essentially. And I'm going to emphasize a certain phrase there at the bottom, which is: be deterministic over probabilistic. Which is maybe an interesting way of approaching the way we're going to deal with security across the board. All right. And by the way, that is a real, undoctored photo of me in a chef outfit. No Photoshop at all. Let's move. I'm going to be doing the odd little demo-y thing where it feels appropriate. I've got a few tools set up just to show you, when I say something's easy, I'll put my money where my mouth is and show you just how easy it is. This is me. That is actually the room I'm in right now, just a different angle. I've got Chewie over here in his Anonymous mask. Hard to tell. He thinks people can't tell who he is, but we know. I'm a developer advocate at Bridgecrew. I'm a DevSecOps enthusiast, for what that's worth; the evidence of which: I run the DevSecOps London Gathering meetup. So if you happen to be in the London area and you're looking to get more DevSecOps because you haven't got enough, check that out on Meetup. I'm a big Raspberry Pi geek. I've got a bunch of Pis you can't quite see behind me on the shelf that run all sorts of wonderful CI/CD experiments. And I've worked in a lot of places, but I've landed at an awesome place called Bridgecrew by Prisma Cloud. I do a lot of other things.
I've got a Twitch show whose name I'm not going to say, a podcast. I'm a beer taster. And if you just want more of me, which by the end of this you won't, there are lots of ways to find out more about me. I'm very friendly on LinkedIn, so if you want to connect, go for it. I'm very easy to find, thanks to my name. All right. Let's, pardon me, set the scene. Security. And I'm going to put this in the context of more traditional InfoSec, just to try and get your heads around, if you're not in security: how do they think? What are they doing? Why are they so much trouble? Security has a tendency to look at the output of what we're doing, and they see noise as good, amazingly. Now, some of you are probably like, no, I hate noise. Well, we all hate noise. But the opposite of that is dangerous in many cases. They like to see more information coming at them. And then they throw all that into some kind of processing system, maybe with some AI, that's very trendy, or some ML or whatever. And then they try to produce indicators of compromise, or they try to create a signature, or detect anomalies, or look at runtime. This form of advanced monitoring is kind of late in the game. But the opposite, less data, means we might miss something, so they consider that to be dangerous. So they're constantly in this cognitive overload state, looking for ways to solve that. Let's keep that in mind. The other struggle that security has is whether they're looking at a primitive solution or an advanced, expensive solution; that tends to be the finance question. Maybe there's an open source tool that does some pretty cool things, and I'll be talking about a lot of open source today. I mean, I would be lying if I didn't say that I work on open source tools; Checkov is what I'm going to use throughout.
And it's great, but scaling it to thousands is not easy. Now, obviously, you can pay for big solutions that do a lot of things for you, but then you're looking for return on investment. This is just more cognitive load for them. The result is that they have a tendency to let that overload make them more reactive than proactive. And what I'm going to be siding towards in this talk is how to be more proactive, which is good, and easy ways in which to be proactive. Probability versus certainty. If we're looking for things and we don't know how to define them, that's probability. If we can provide some sort of determinism, then we can err towards the side of certainty, and we can do less. You're always going to be doing some probability, but as little as possible. That's what we're going for, right? All right, I'm just going to define risk: risk is likelihood times impact. Sometimes you just need someone to say that. Some might argue it's an oversimplification, but it's a very handy one. Because you can usually categorize everything you're doing from a security perspective into: am I reducing the likelihood? Am I reducing the impact? Or am I doing neither, and should I just scrap it? And that's a good thing. That really does help us define what we're going to do and how we're taking action. All right, we're going to shift now to everything as code. Everything as code is great, isn't it? Imagine if we did everything as code. Huzzah! We're done. Are we? Are we done? There are a lot of options for everything as code, aren't there? Standards are a thing. So many standards. CloudFormation, Terraform, Pulumi, Cluster API YAML, CDK, and the originals, JSON and YAML. Yeah, the foundation. Helm, Ansible, Python, I didn't put Python in there. Bash, scripting. There's no end. It just goes on and on. Chef, there we go. There's no end.
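That risk definition is simple enough to write down. Here's a toy sketch in Python; the 0-10 scales and the example numbers are invented purely for illustration:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Risk as likelihood times impact (both on an arbitrary 0-10 scale here)."""
    return likelihood * impact

# Every control either reduces likelihood, reduces impact, or does neither.
baseline = risk_score(likelihood=8, impact=9)       # no controls
with_scanning = risk_score(likelihood=3, impact=9)  # pre-flight IaC scanning lowers likelihood
with_gitops = risk_score(likelihood=3, impact=4)    # redeploy-from-git also lowers impact

assert with_gitops < with_scanning < baseline
```

If a proposed activity changes neither factor, this framing says to scrap it.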
There are so many ways in which we can do something as code. How do I pick? What am I choosing? It's good. There's no wrong answer. Everything as code is a good thing. It's reproducible. It's a wonderful place to be. However, we can make some smart moves when we're talking about everything as code that then make security easy. Sometimes ops people can just run away with everything as code and choose something that they like. But actually, as security people, if we can influence and say, well, why don't you use this one instead of that one, we might be giving ourselves a real leg up in terms of security. Here's the quote: security needs a codified way of describing what we are about to do, as opposed to monitoring what we did. It's a nice quote, isn't it? You know who said that? Albert Einstein. No, I'm just kidding. It was me, just now. I just made it look like a quote so you'd pay attention. But I think it's a good statement, and it may be thematic throughout this presentation. Okay, why as code? Why do we want everything to be code? Well, ClickOps. I don't know if you've ever heard that before. It's a bit of a pastiche, making fun of tacking "ops" onto everything, because if you put "ops" on the end, then it must be a good thing. Going into a console, gcloud, AWS, and just clicking and provisioning: for one, the interface changes often enough. It's super complicated. And I'll be darned if I can remember what I did last time. So it's bad. The idea of doing anything imperatively, which is the opposite of declaratively, is difficult. It's difficult to do twice. Even if you make a recipe, you do a bash script and you use an API: understanding all the different APIs, even if you just look at AWS, it's not consistent. If you've got any experience with that, it's just too easy to make mistakes. You get non-repeatable results.
You may not have change control. A lot of ops people just have scripts sitting on their desktop that do things; certainly back in the old days that was very common. And other reasons, like humans. You know, humans, you and me, us, we're not the smartest people in the world. Well, we are the only people in the world, but we don't always make smart choices. So let's try to automate away our stupidity if we can. All right, let's try inverted thinking, and this is where we're heading towards determinism. Let's reduce likelihood. Employ checks early and often. When people talk about shift left, it's a bit of a cliche for security people; for some people, it's new. Let's scan our anything-as-code for problems or misconfigurations. If it's as code, well, maybe I can determine what's wrong, so I'm not doing that reactionary monitoring thing, right? That would be good. I can use the concept of infrastructure as code and employ its context when I'm looking at things like vulnerabilities. Is this vulnerability in a vulnerable place? Is it deployed in a vulnerable way? Can I get a visual of what this looks like? That can change our definition of risk; we're talking about likelihood versus impact. Context is king. Reducing impact, speaking of which: leveraging chaos engineering. Netflix is very famous for kind of founding this, with Chaos Monkey and destroying things. The idea of chaos engineering has existed for years in other industries; as usual, we're copying what other people do. That cow is not happy with this analogy. Cattle versus pets. This is important. We should really consider that in a cloud native world everything is ephemeral. That means nothing persists, from the tiniest container to the biggest cloud. I should be able to see an anomaly in a worst case scenario. Things are being weird.
I have my entire state, from code to cloud, in Git, and I can just say deploy, and everything gets put back to the way it was. That is amazing. Now, it's very hard to do, easy for me to say, right? Of course, start small, but work your way up towards that kind of world, so that if things do get destroyed, I can make them come back. Kubernetes itself is designed around being able to destroy a container and have replicas, have services that feed different replicas, load balancing; all of that stuff's built in. So this is a wonderful thing for security, but we need to be able to dictate state. All right. This is lesson one, part one, in the importance of supply chain context. I told you I was a beer enthusiast. This beer right here is an imperial stout. If you're unfamiliar with what the word imperial means, maybe you're a Budweiser kind of person. You're like, I don't know, do we have imperial Budweiser? You would probably like it if it existed. Imperial means it's strong. That's a 10% stout there, a bit of a meal. And you can see in the description that I took off their website, I changed the percentage to a CVSS score, the common vulnerability scoring system. And that's high. That's the highest you can get: 10. This looks dangerous. I would hesitate to try one of these, or I would do so with caution. I'd be sitting down. Right. So we need to get context. In its single form, you know, maybe on a relaxing evening, it would be just fine to have one of these. If we check the deployment of this particular can of beer, let's check it out: eveningdrinks.yaml, patterned after Kubernetes YAML, if you're not familiar. We can see all the labels. We can see the dinner was salad. I would not have a salad, maybe a pasta, before this. Three replicas. Watching Amazon Prime, other services are available. The surreal show American Gods is weird.
And I had three, and 45 minutes of this game, and it isn't dry January. Oh my goodness, this is awful. So this is like an example of the worst possible deployment, which amplifies the risk of this container. See what I did there? Waited a while to bring the container pun in. But that's the idea. This could be harmless or it could be terrible. Containers can scale. Images can scale to create an impossible-to-remediate attack surface. All right. That's the first time I do this; I'm going to do one later on too. Getting started with GitOps. Yeah, GitOps is cool. Infrastructure as code plus GitOps: that is a solid step towards security. And yet it all sounds like ops, right? It's not really on the radar of security. If it's not on your radar, it should be. This clever individual, Victor Silva, immortalized himself on the Weaveworks website with this GitOps in one slide. And it's really small to read, so don't worry, it's exactly the text I put there. Git is the single source of truth. Git is the single place where change happens. Runtime, from code to cloud, is now immutable. We're not changing anything. We're never going into the console. I'm not going into Kubernetes. I'm not exec-ing into a pod. I'm not doing any of that. If I need to make changes, I make the changes in Git, and I use something clever, like Terraform or like Argo CD, to push that out into my environment, so that changes only happen in one location. Everything is observable. Everything is verifiable. Awesome. Now this springboards us into security very nicely. I'm going to go back and talk about the terms. I've talked a lot about infrastructure as code, and I gave you some good examples there. Whole big list, right? And I've talked about deterministic, and I've talked about declarative. Let me give you a very simple example of the difference between declarative and, let's say, not declarative. Procedural, imperative, very similar. Say I need to bake a cake.
This one will bake a cake for me. It's written in bash. I'm going to preheat the oven. I've got my oven temperature. I haven't got any layers yet, so while layers are less than or equal to two, I'm going to mix some flour, eggs, sugar, style equals fluffy. Oh, yeah. Bake time 30 minutes. Repeat. I've got my layers. Pretty awesome. Nothing wrong with that. That used to be the way everything was done. Let's take a look at this declarative example. I could have chosen Terraform here. I could have chosen anything, right? CloudFormation templates, it doesn't matter. I could have done it like that. Essentially, this is more like being in a restaurant. This is like saying: I would like cake, please, two layers, and I trust my favorite chef, fluffy sponge. I get chef fluffy sponge to make my layers wonderful. I don't want to know how to make a cake, and I'll get a cake. Both results are the same. In one instance, I'm relying on somebody else to make sure the state is retained, and to provision it and make it happen, so I get the cake. In the other instance, it's the instructions. It's a recipe, essentially. Now, there are two big differences between these two. One, there's this concept of idempotency, which is a strange-sounding mouthful that essentially means: if I run the script again, does it change? Do I have the same result? Let me explain. If I run the bash script twice, I have two cakes. If I do the declarative version, I have one cake, because it's defining state. If I already have a two layer cake, it just goes: yeah, you've already got a cake, what are you asking me for another cake for? If there's no cake, or there's one layer, it makes a second layer. So it's just a definition of retention of state. It's quite nice. And if you're a security person, you're thinking: I like that. The other difference is, what if I get rid of the picture of the cake? What am I making? That's from a readability perspective.
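The idempotency point can be sketched in a few lines of Python; this is a toy model of the two cake approaches, not any real tool's behavior:

```python
# Imperative: every run produces another cake.
cakes = []

def bake_imperative():
    layers = []
    while len(layers) < 2:
        layers.append({"mix": ["flour", "eggs", "sugar"], "style": "fluffy"})
    cakes.append(layers)

bake_imperative()
bake_imperative()
assert len(cakes) == 2  # two runs, two cakes

# Declarative: describe the desired state; applying it twice changes nothing.
desired = {"kind": "cake", "layers": 2, "style": "fluffy"}
state = {}

def apply(spec, state):
    if state.get("cake") != spec:  # only act when state drifts from the spec
        state["cake"] = spec
    return state

apply(desired, state)
apply(desired, state)
assert state == {"cake": desired}  # one cake, no matter how many applies
```

The `apply` function is the whole idea behind tools like Terraform in miniature: reconcile reality toward a declared spec rather than replay instructions.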
The declarative version is nicer. I know I'm making a cake. Over there, I don't know if I'm making muffins. I don't know what I'm doing. So there is that element of it. I'll do a little name drop, I know I'm doing it. Kelsey Hightower, this is a quote from 2018, so kind of going way back again. He was praising GitOps. But what I wanted is the second sentence, which I'd actually like to double down on: declarative configuration is the key to dealing with infrastructure as code. Yeah, it's a big deal. It sets the stage for new tools. Now, I've already lightly touched on Checkov. That is a tool for scanning infrastructure as code, but specifically declarative infrastructure as code, and it made security in that place easier. So that's really quite cool. Let's just keep that in the back of our minds while we introduce the supply chain: the layers of our delicious cake. See, I'm still on the cake. You'd think I was obsessed with cake. But that's a beautiful thing, isn't it? Look at that. So what are my layers here? I'm going to be going through layer zero, which I consider the foundation. It's the buttery biscuit base of our cake, right? You've got to have a cloud. If you're cloud native, there's some cloud. It's going to be the first thing you do. Then we're going to switch back to kind of an AppSec perspective, and I'm going to look at the application. You're writing some code. You still need to write secure code. That has never changed. Writing good code is something we've been good at, and I think as we move to microservices, we're even getting better at it. It's later where it gets complicated: the application's friends, or dependencies, are where things have gotten more complicated lately. That becomes a part of our supply chain. You can see how it all connects. Everything has dependencies.
Even the cloud itself has dependencies that can report serious vulnerabilities that we need to be aware of. So our supply chain is complicated. It's like a giant spider web. We're going to talk a little bit about the image itself: we're building images and pushing them into our pipeline, wrapping them all up in a gift basket deployment, and sending that into runtime in our pre-prepared cloud. Our application goes into our image. But there's the pipeline itself. That's gotten a lot of attention in 2021. We used to kind of ignore it, and now it's front and center. All of this is supply chain. The pipeline is not the supply chain on its own; some people make that mistake, thinking pipeline equals supply chain. You know, it's pushing everything through, it's where everything comes together in a little party and creates the application that's going to be put into runtime. No, it's just an enabler. It's just an automation system for our supply chain. Our supply chain is vast. I'm going to be going through each of these, talking about the what, the why, and the pros and the cons, and doing a few little play things on the way. Okay, securing the base, the buttery biscuit base: the cloud. What are we doing here? Well, ideally, we talked about infrastructure as code, right? We're talking about everything as code. So what we're doing is using infrastructure as code for provisioning. Exactly. We're not doing ClickOps. We're not doing any of that. We're going to choose an infrastructure as code type. I'm biased toward Terraform because it's kind of cloud independent, right? CloudFormation is pretty cool. It has its advantages, no doubt. I don't want to get into that argument, but Terraform just works on all the clouds, which is good. And it's stateful. And it's declarative. Brill. It doesn't matter; as long as you're using something, that would be great.
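As a tiny illustration of what declarative provisioning looks like, here's a hypothetical Terraform sketch; the bucket and its name are made up, and Terraform's state tracking is what makes re-applying it a no-op:

```hcl
# Declare the desired state; Terraform reconciles reality to match it.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # illustrative name
}

# Versioning declared alongside, not bolted on by a script afterwards.
resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

Running `terraform apply` twice yields one bucket, not two, which is the idempotency property we keep coming back to.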
Then I'm going to start using infrastructure as code scanners for my pre-flight checks. Not too bad. There are lots available. I've got a logo up there for Checkov, which is what I work on. There's KICS by Checkmarx. There's Terrascan from Accurics, which is now Tenable, I think. There are a bunch of options out there. Using none of them is bad. Using one of them? Great. Using two of them? That's a bit overkill. All right. But I think there's a real opportunity to be able to do that, and it's super duper easy, right? If I were to go over here and take a look, just to give an example of Checkov: Checkov is a command line tool. It has GitHub Actions. It has an admission controller now for Kubernetes. It handles all sorts of different languages. It's just really super easy to run. In fact, it's as easy as doing -f. I'm in a Kubernetes repo right now; I can run it against a base deployment file. And that's it. I just said, this is the file I want to scan. I didn't even tell it what type of file it was. I didn't have to. And it comes back and says: this is what's wrong with this. There's a good question in the Q&A that I'm going to get to later on, so just trust me, Diego, I will get there. It's about Pulumi versus Terraform, so I'm going to touch on that. CSPM. So once we've done a good scan to make sure we're not deploying anything that's vulnerable, how do I know that it's still okay, right? There are these tools called CSPM, Cloud Security Posture Management. It's another Gartner acronym. I was recently in a conversation where someone said, every time you say an acronym, a developer dies. Maybe not untrue. Cloud Security Posture Management is the ability to monitor your current cloud runtime. There are lots of different ways to do it. Bridgecrew is a good example of monitoring runtime and actually synchronizing it back with the infrastructure as code GitOps version to find drift.
That's pretty cool. But using something is always really good. There are free versions; you can see I've got a version of that, so you can get into it for free. AWS and most of the clouds provide you something to look for misconfigurations. But if you can tie it back to Git and play GitOps, that's what we're trying to do here. That's what we're doing in terms of securing the base. Why are we doing it? Well, we touched lightly on it, right? ClickOps is bad. Humans are bad. We're creating code, and anything that humans create should have some form of automated verification. The pros of doing this: infrastructure as code gives a controlled and observed change workflow. Boom. GitOps. That's why we want to do it. We talked about chaos engineering. Yeah, infrastructure as code of any kind is chaos engineering friendly. If I don't understand what's happening in runtime? I don't know, just redeploy. Do a terraform apply again. Reduction of dependence on tribal knowledge, and that's kind of a big one. I saw an entire talk based just on reduction of dependence on tribal knowledge. No surprise it was done by an SRE and huge fan of infrastructure as code and GitOps. No surprise. What are the cons? Well, infrastructure as code is code. And does anybody write it from scratch? No, I don't. I go get modules from the Terraform Registry. I go get Helm charts. I do everything that everyone else does. And actually, we released some research just last year: we scanned all of the Terraform Registry and all of Helm, looking at the percentage of vulnerabilities, and came back with some shocking results in terms of how insecure some of the defaults are. People are being awesome and contributing to some of these modules, but it's not necessarily the most secure thing. So scanning even your third party modules is really important.
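To make the drift idea concrete, here's a toy Python sketch of the comparison a CSPM-style tool performs conceptually; the resource and its fields are invented for illustration:

```python
# Desired state as declared in Git (infrastructure as code).
declared = {"s3_bucket": {"encryption": True, "public_access": False}}

# Observed state in the cloud, after someone clicked around in the console.
observed = {"s3_bucket": {"encryption": True, "public_access": True}}

def find_drift(declared, observed):
    """Report fields where runtime differs from the declared spec."""
    drift = {}
    for resource, spec in declared.items():
        live = observed.get(resource, {})
        changed = {k: (v, live.get(k)) for k, v in spec.items() if live.get(k) != v}
        if changed:
            drift[resource] = changed
    return drift

# (declared value, observed value) for each drifted field
print(find_drift(declared, observed))
```

When drift is found, the GitOps answer is to re-apply the declared state rather than patch the runtime by hand.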
Another con, a little bit less so at the moment, but I've seen it happen, is template squatting. You see this in terms of images, but not so much in terms of Helm yet; I think it's merely a matter of time. You squat on a known, famous Helm chart, for example, and you just tweak the title a little bit, change the trust boundaries, and suddenly you're deploying a vulnerable version of what you thought was a good thing. Right. The other con for infrastructure as code is that we don't tend to update it as often as we do application code. So we can write it once, we can scan it once, we can think it's great, and then the world changes around us and it ages over time. So, something to note: we need to be attentive and have regular scans of infrastructure as code, even if it was deployed securely. All right. Layer one, securing the code, the actual application source code. This is old school stuff. I used to work in SAST, and it's scanning in the code pipeline to make sure we're not creating more vulnerabilities. It's looking for the unknown unknowns, not the known vulnerabilities. We're looking to make sure we're not screwing something else up. Humans are involved. Here's a stat. In one of my previous lives, I used to work for a company called Synopsys on a SAST tool called Coverity, and we had a lot of research, and statistically, for every 1,000 lines of code you write, you create a bug or a security flaw. This was a pretty accurate number over a very large data set. So think about that. If you're an active coder, that's kind of the regularity. So running things like SAST, although they can be a bit of a pain, and they can be notoriously slow, is absolutely mission critical.
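Real SAST engines use parsing and data-flow analysis, but the shape of the idea can be shown with a deliberately naive Python sketch; the rules here are toy regexes, not how Coverity or any real tool works:

```python
import re

# Toy rules: each maps a finding name to a pattern. Real tools build an AST
# and track tainted data instead of matching text.
RULES = {
    "hardcoded-secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "dangerous-eval": re.compile(r"\beval\("),
}

def scan(source: str):
    """Return (line number, rule name) for every rule hit in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

code = 'api_key = "hunter2"\nresult = eval(user_input)\n'
assert scan(code) == [(1, "hardcoded-secret"), (2, "dangerous-eval")]
```

The point of running this in the pipeline is the determinism: the same source always produces the same findings, before anything reaches runtime.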
There are a lot of very cool things out there. GuardRails is a good example of a free SAST solution that will plug into your GitOps and run all sorts of wonderful SAST options regardless of the language you're using. So that's pretty cool. I think people should do that. The pros: shifting left and finding unknowns early is a huge advantage, because those things otherwise tend to be found in pen testing, and that's crazily expensive. So you should do it, but don't sacrifice your speed or your velocity to make that happen. There are fast tools out there; Semgrep is a great modern one, I should add that to my slide. Gosec for Go. These are all really good tools. You can see the cons I've got: slow and potentially disruptive. Implementation can be difficult. Technical stacks in most organizations vary madly as we move to cloud native, so it can be difficult to standardize. We have to rely on developers to choose their own tools. A few tools are also IDE integrated, and there are some great ones. I do a lot of Python, so I use cornflakes in my IDE, which is like a wrapper around Flake8. It's great for Python. ESLint for TypeScript or JavaScript. And then when we get into some of the newer languages, like Rust and Golang, we don't get a lot of selection in terms of the tools we want to use. But the good news is that microservices tend to be more secure. Back when we were looking at monoliths, there were lots of problems in spaghetti code. There's a lot less spaghetti code as we move to cloud native, which is great stuff. All right, securing your dependencies. 80% of software is open source dependencies. That's a rising number. That's why we're doing this; I know I'm going in reverse order right now. 100% of open source vulnerabilities, known vulnerabilities, as opposed to unknown, are known to the bad guys. Your Log4js. Your Apache Struts. These are still out there.
And within days of these things being announced, there are bots scanning across the internet to find them. The good news is that there are loads of things to do this. In fact, we're really blurring the lines between SCA and image scanning these days. A lot of the tools, like Trivy and Checkov, do this for you for free. As long as they see a package manager file present, like for Python or npm or Maven, these files just list all our dependencies for us. That's amazing. So that's making this problem a lot easier. Now, I'm kind of getting into the pros, so I thought I'd switch slides. Finding known vulnerabilities is great. This is the low hanging fruit. It maybe won't catch all the things you wrote yourself, but for known vulnerabilities? There's no excuse for releasing with known vulnerabilities, not really. And there are a lot of open source offerings that allow you to do that. The cons: it can be difficult to prioritize, because if you've got a lot of code, it's very, very difficult to consume all of the vulnerabilities that you might have in a large code base. Very difficult. 15,000 CVEs on average are disclosed per year. 15,000. So think: you're probably going to have some in your supply chain. Really worth knowing that. The questions you need to ask, and to find security tools for: are the dependencies actually used? Have I bloated my dependency file, my package manager file, with things that I thought I needed but didn't, thus creating a whole bunch of false flags in my CVE list? That's a real risk. Remember the beer, right? Can we use the context of deployment to devalue some of the vulnerabilities that might even be 10 out of 10s? It's difficult. This is under cons because this is hard to do. Not a lot of products out there really enable us to make that happen. Another good question; I'm going to leave that. So far I'm leaving the questions to the end because I think they're going to be fine. All right, layer three, securing the image.
Very similar to SCA: finding known vulnerabilities in both what we're adding to our image and what's in our base image and its dependencies. It really does overlap with SCA a lot. There is some real gray area, I guess you could say. You can also, though, check Dockerfiles, because there are a lot of best practices for Dockerfiles. For example, if you build an image and you don't say USER nobody, or USER nginx, say, it runs as root, which is weird, right? If you exec in, being root in a container image opens a lot of doors. It does. It's bad. Yet it's the default. So scanning an image for best practices and vulnerabilities is awesome. Why do you want to do it? Defaults are dangerously insecure. Remember Docker's promise? Build once, run anywhere. Amazing. Great for ops. Great for developers. It also introduces a whole ton of user-space OS dependencies with critical vulnerabilities that completely bypass the security team that has focused its efforts on building secure platforms for us to run our Kubernetes and our containers on. Super secure nodes, terribly insecure containers. That's a bad thing, and we need new mechanisms to deal with it. Scanning the image in our CI, scanning it on our desktop: this is absolutely critical. One image can scale to thousands; I think I said that earlier. What are the pros of image scanning? It teaches best practices. I want to stop there for a second and show what I mean by that. So, I'm still here. I like Trivy, from the folks at Aqua. It's a good tool. And it's easy. Installing Trivy is a brew install. It's simple. And if I want to scan an image, I don't even have to have it. I can just say scan image. Let's do this. Let's do Alpine. Alpine is a very popular base image. Hopefully that's all visible. And I'm going to say three, because what if I don't know the most current, super secure version, but I know the major version is three? I'm going to do trivy image alpine:3.
And this is probably going to trip up my broadcast, because it's going to try and download a library or something like that. It's running a little. This is the demo. There we go. Great. OS: Alpine 3.15.0. Thank you very much. I did not know that. Now I can create a Dockerfile where I pin alpine:3.15.0 as my dependent base image, and I learned that by using Trivy. How strange. And I know it's got no vulnerabilities. What might have happened in another world is I might have gone trivy image alpine:3.14. Let's go back in time. I'm going to guess that there's a 3.13.1. That's probably going to backfire on something I'm scanning. I'm peeking and poking around to find a good version, or maybe I'm checking an old image that I used to use, and its dependencies. Then things can go wrong. What's amazing about Alpine is it's one of the quickest in terms of updating vulnerabilities. I haven't gone very far back in time, only to 3.13, and I have three critical vulnerabilities. So that's a great example of how I can determine the base images I want to use. That's all I'm doing. I'm not even looking for actionable results, and yet I can do that with Trivy. There's kind of no reason not to be using tools like that. And it does integrate into things like VS Code, so definitely check that out. It's really, really simple. The cons of tools like this: it can become security theater, no doubt. Do I scan images I'm deploying? Do I scan my entire registry? Am I using my entire registry? I can just confuse my entire SCA process when I start adding images to my dependencies that are not in the repo, or that I haven't made yet. You need to have a real strategy around that, and that can become a problem. Usually you need a little bit of assistance or planning around that. All right. Securing the pipeline. This is a big one, and I probably could spend the rest of our time on it, but I'm going to try to be quick.
Software supply chain integrity — this is what it means. We're talking about tools like the ones I've got down there: in-toto, Chainguard, Sigstore, Cosign, Connaisseur — I was playing with that one recently. This is a super complicated space, and it's very open to assistance if you want to become part of the CNCF supply chain security work. It needs work. It involves signing dependencies, artifact integrity, and workflow integrity, which may not be something that's on your radar. It's very easy to game a GitHub Action — there are a lot of malicious things you can do in there. Without showing an example: if you have a GitHub Action that triggers on an issue, there's a way to put code into the title of the issue that gets executed. There are a lot of weird things you can do with that. Also, secrets integrity. There's a whole lot of tools out there that look for secrets in a GitHub repo — or GitHub itself does it for you — to make sure you're not accidentally embedding passwords or API keys. This is all part of your supply chain, and it's something we haven't paid much attention to over the years. Why are we doing it? Well, the supply chain — the inner bowels of it, the CI/CD servers — has the keys to the kingdom. It has access to your Git, your code, your secrets, your keys; it can deploy into your runtime. It has access to everything. And I think I've probably already said it: SolarWinds. We saw it last year — a really good example of somebody finding a weak link, getting into the middle of the supply chain, and injecting their own dependencies with malicious content. Ensure your code is still your code. Now, just to flip back: there are some great projects down there that you should investigate if you're keen on more information on how you can do this.
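As a rough sketch of what artifact integrity looks like in practice with Cosign — the registry path, image tag, and key file names here are placeholders:

```shell
# Generate a signing key pair (writes cosign.key / cosign.pub)
cosign generate-key-pair

# Sign an image you've pushed; the signature is stored alongside it in the registry
cosign sign --key cosign.key registry.example.com/myapp:1.0.0

# Verify before deploying — this fails if the image was replaced or tampered with
cosign verify --key cosign.pub registry.example.com/myapp:1.0.0
```

Admission controllers like Connaisseur can then enforce that only images passing that `verify` step ever reach the cluster.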
A lot of it is in its infancy, and you can see in some of my cons that some of the solutions are very difficult to deploy at scale, and few commercial solutions are available. The problem is huge, but solutions are on the way, so keep your finger on the pulse of that. Securing the deployment. I actually touched on that just now: best practices for Kubernetes objects — operational risk combined with security risk. Why? Because defaults are dangerously insecure. Of course they are. Often we have no security context in our defaults. And these manifests bring essential context to all our vulnerabilities. Knowing that our deployment manifests, or the Terraform that deploys them, are bringing vulnerable code into play gives us absolutely amazing insight into whether we need to react — in a world where we are flooded with vulnerabilities and wondering how to manage them, this is essentially the answer. Now, I'm going to give you a quick example of that. Let's go over here. Here we go: here's an example of a Kubernetes deployment, and I'm looking at it in VS Code. Now, I think it's pretty good, right? And I'm asking, what are the best practices around this? I've been using an infrastructure-as-code tool — it's called Checkov — and I can look at this here and see: okay, I'm missing a few things, not too many. And I can take advantage of all of this in-IDE information. I'll go down and take a look at this particular one again. I've made a few changes — this one, I genuinely think, is my prod one. I think it's pretty much good to go and I'm ready to roll it out. And I can see: ensure service account tokens are only mounted when necessary. Oh, that's a good point — service account tokens in Kubernetes are mounted by default.
Well, I can fix that pretty easily with a single line, amazingly. I can put it up here, I can put it down at the bottom — let's pop it just here. Get my indenting right; YAML doesn't like bad indenting. I save it, and I see my security tool start its scan — there are a lot of tools that trigger on save, in both JetBrains IDEs and VS Code. And we're good to go. It's probably hard to see, but there's a tick box there. Look at that: I have no findings. I'm ready to check that in, and I'm secure. This is the kind of little nudge towards security that's becoming easier and easier, particularly for Kubernetes objects. It's very easy to make that happen. Super simple. What blows my mind a little bit is that if you Google "nginx deployment", the first link you'll get is the actual Kubernetes documentation. It says, here's a good example — and it will functionally work, but it doesn't even mention securityContext. You're never going to get properly educated on how to do that from examples like that. All right, what are the pros? Many open source tools. For the way we wrap up our code and our applications, lots of tools can help us: KubeLinter, Checkov, Kube-score, KICS. Many of them are easy to use. Few do what I keep saying, though — open source tools that combine image vulnerabilities with their deployment context are few and far between. In fact, right now I can't think of one. I know we're working on it, because it's hard to do, but I don't know many that really do it. The other thing is that different deployment IaC languages can subvert your findings. Now, I'm using Kustomize in my example on the command line here. If you look at the application, I have a base deployment, and what Kustomize does is add to it: you have a base, and then you can override it, like a class structure. So I can say, just add these things to me.
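Putting that demo together, a hardened version of such a deployment might look something like this — names and the image tag are illustrative, and the `securityContext` fields shown are common best-practice checks, not an exhaustive list:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      automountServiceAccountToken: false   # the one-line fix from the demo
      containers:
        - name: nginx
          image: nginx:1.21.6               # pinned, not :latest
          securityContext:                  # absent from most copy-paste examples
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
```

Compare that with the documentation example you'll find by Googling: functionally identical, but with the service account token and security context defaults left wide open.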
If I were to look at my Kustomize overlays, the test deployment isn't really a valid Deployment on its own — it's just got replicas; it's an addition to my base. So scanning that for misconfigurations would be, well, what would happen? You need a tool that actually understands some of these formats — like Helm, too — that need to be rendered in advance. Thankfully the tool I'm using does that, but keep it in mind. One other thing I'd like to mention — I don't want to skip past it — is non-declarative abstraction. This is where I'll go to Diego's first question: Pulumi instead of Terraform. Pulumi is excellent because it uses languages you already know — Turing-complete languages — to loop through and create infrastructure as code, all sorts of awesome things. But in the end there's an intermediary. This is an example of cdk8s: because the destination wants a declarative definition, cdk8s generates YAML that gets deployed. The problem we have with these languages at the moment, from a security perspective, is that yes, we can scan the results before they go live — but much like the problem I used to have with TypeScript, back when it got transpiled into consumable JavaScript: if we find a problem, how do you translate that back to the code you've written? You end up with something that is easier for ops and development, but much harder from a security perspective — hard to show how the problem over there on the right maps to the code you've written on the left. Now, do I think tools like that are amazing? Yes. Do I think we have a competent security solution for them? Not yet. They combine imperative and declarative together, so it's complicated. This is what I was saying about securing the deployment: abstractions make it difficult to scan the code and to trace misconfigurations from the generated YAML back to source. I'm going to touch on this question.
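A sketch of the render-then-scan approach for overlays — the directory layout here (`overlays/test`, a Helm chart in `./chart`) is just an example; adapt the paths to your repo:

```shell
# An overlay isn't a valid Deployment by itself, so render it first...
kustomize build overlays/test > rendered.yaml

# ...then scan the fully rendered manifests
checkov -f rendered.yaml

# Or lean on a scanner that renders templated frameworks itself,
# e.g. pointing Checkov at a Helm chart directly
checkov -d ./chart --framework helm
```

Either way, the thing you scan must be the thing that actually reaches the cluster, not the fragment a human edits.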
That's a very good question — this is cool; I'll mention Terragrunt in a moment. The importance of supply chain context, part two. This is the example I was trying to lead up to. Say I have a container with a CVE with a CVSS score of 9.8, but it's in a back-end service with no external connectivity, not running as privileged, and I'm using a runtime tool — let's say Falco — to record a baseline of its process activity, so I'll know if anything weird happens. Do I care about it? Probably not. I mean, I do, but I probably have bigger fish to fry. Meanwhile, I might have a CVSS 7.6 in a front-end service that sits behind a load balancer with an exposed port, on a base image like Ubuntu that has all the bad things — curl, wget, nmap, netcat. Do I care about that one? Yeah, I do, actually. That stresses me out more than the other one. And I think this is the kind of problem we need to start tackling from a supply chain perspective. All right, touching on runtime: maintaining security once we're live. We're already doing a great job here — security people are; this is what we've done for ages. Intrusion detection, endpoint detection — this is a sweet spot for some people. There are new technologies like eBPF — I just mentioned Falco, and there's Tracee — that can do very unintrusive monitoring of what's happening within our runtime. In fact, eBPF is being extended beyond monitoring into network policies, service meshes, all sorts of wonderful things that actually improve both performance and security. I think that's pretty awesome. Closest to code in the runtime — the very last moment, almost like the CD step — is admission controllers. There are a lot of very cool ones out there that apply policy. Personally, I think of an admission controller like the bouncer at the door of a club: the bouncer is there to make sure you're not carrying a baseball bat and you're not wearing the wrong shoes.
They don't care if you have a criminal record. They don't know what you've got; they don't need to know your backstory. So applying hard policy in an admission controller can be very punishing. If you're going to use OPA, use OPA tools all throughout your pipeline, all the way to your desktop. If you're going to use Checkov, use Checkov all the way. Consistency of policy — security as policy — is absolutely important if you're going to start using these things in runtime to look for problems with running containers, running deployments, changes to running deployments, or even to block entry. There's a real strategy and philosophy there that needs to be addressed. For detecting unknown unknowns, you have to have monitoring; you have to have runtime anomaly detection. It's expensive, it's late, and tracing misconfigurations found in runtime back to code is very difficult — finding things in runtime and identifying where to fix them. If you're preaching GitOps, how do you trace problems in runtime back to your Git? It's hard. There are some new tools out there — one's called Yor — that do tagging in Git that you can then see in your runtime. It's one of the few ways I've seen to make that happen, and you can have it for free. But that's still a bit of a gap. All right, I'll jump into some questions in a moment, but some key takeaways first. One: there's no silver bullet — defense in depth is totally critical, right? No one layer of our cake is flawless. Shifting left is good: it means more people involved, and many hands make light work, essentially. Shift middle, shift everywhere — forget shift left, do it everywhere. Everything-as-code is important: GitOps at rest and in runtime. And declarative equals deterministic, not probabilistic. So, I've overrun a little bit — sorry about that. I'm going to put the thank-you slide up.
I'm going to look at some of the questions I've got — probably in reverse. Where do SBOMs fit into these software factory activities? Well, SBOMs can be output by just about every tool that detects vulnerabilities, particularly in terms of SCA, but also in terms of misconfigurations. For example, Checkov will produce a CycloneDX SBOM. What that's good for is when we're talking about exchanging software. A software bill of materials — for the people who don't know what an SBOM is — shows exactly what you're using and which versions; it's attestation as to what is in your software. These are absolutely critical for software exchanges in the modern era — even the oven I just bought came with an SBOM, believe it or not. So these things are absolutely critical for, let's say, B2B transactions of software. I'm going to go back up. There's one that says "same question for Terragrunt" — what was the previous question? "Infrastructure-as-code security practices inside software architecture design patterns, with archetypes reusable for those notions of security — I mean, seek more IaC templates." I'm not sure about that question. "Lowering IaC code maintenance, new Terraform syntax." I'm not sure — it sounds like good advice, but I'm not sure it's a question, so I might skip the answer. But the Terragrunt question from Yaron is a good one. Terragrunt is a little bit like Kustomize for Terraform. Terraform enthusiasts would argue that Terragrunt is no longer required, but it does muddy the waters when scanning for infrastructure-as-code misconfigurations, because I don't know of a lot of tools that can handle Terragrunt. This is where I keep coming back to: Terragrunt is very handy for a lot of people — I think I used to love it — but when I got my security hat on, I realized I was creating a lot of problems, a little bit like the Pulumi scenario.
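For instance, getting that CycloneDX SBOM out of Checkov is essentially a one-liner — assuming Checkov is installed, and noting that output flags can evolve between versions:

```shell
# Scan a repo and emit the findings as a CycloneDX SBOM instead of CLI output
checkov -d . -o cyclonedx > sbom.xml
```

That file is what you'd hand over in the kind of B2B software exchange I mentioned.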
So that's kind of where I'm coming from on Terragrunt. Some people are tied into it, but if you can get out of it, I think that's a good idea, because I kind of feel like it's a fading technology. Sorry, Terragrunt. "Lowering infrastructure-as-code maintenance, mostly for version changes on providers or language changes — do I have advice?" If you're talking about a strategy for lowering maintenance across new Terraform syntax, that I can't really provide advice on. I don't know that it's a security question either, but if I could, I would. Diego, you have a lot of questions. Tell you what I'm going to suggest: slack.bridgecrew.io — Anna, if you're there, can you push that into the chat? We have a Slack channel where you can find me, and you can even find the founders of our company, who absolutely live for Terraform and love conversations about how to create strategy around this. That's the last link I'm going to leave you with. You can also get into the conversation on the CNCF Slack — if you're on there already, find me and let's continue the conversation. Oh, and someone asked — yes, you can add me on LinkedIn. Okay, great. Well, we might have time for one more question before we wrap up. "When will SBOMs be mandatory, and should it be SPDX or CycloneDX reports?" — I think that's how I read it. The when and how, I don't know the answer to; I'm getting a philosophical question about mandatory software bill of materials content. I wish SBOMs would mature faster. I think the best way to make sure you get what you want out of them is to join the communities that are developing them, because they're all extremely open-source oriented. So, you know, be the future you want — a philosophical finish. Love it. Thank you so much, Steve. Any closing words? So: supply chain is hard.
And I think what I just said is actually an important thing: open source is the way to begin everything. Getting involved with the CNCF, getting involved with the Linux Foundation — that is a fantastic way to start your security journey, in an easy way and at low cost. And from there, you'll find a natural path to making some of the major solutions happen for you. Plus, as you can see from my presentation, there are a lot of gaps in supply chain security — so all hands on deck. It'd be great to see you join us in this challenge. Love that. Thank you so much, Steve — thank you for being here and presenting on this topic. We are so excited that you were able to join us, and thank you to all the participants for being here today as well. This recording will be on the Linux Foundation YouTube page if you'd like to relive the moments from today, and we hope you'll join us at future webinars. Thank you so much, everybody.