So, hi, my name is Steve and I work for Jetstack. A big part of my role is helping our clients improve the security of their software supply chains. As you know, that's kind of a big area, a big topic, so I'm going to cover one particular aspect: how you can improve your knowledge of, and due diligence around, the external dependencies that you accept into your organisations and into your codebases.

To start, I'll set the scene and explain why I think getting to know your dependencies is a problem worth caring about. Then I will move on and explain, if you like, the outcomes and the benefits you should expect to gain from getting to understand your dependencies better. And finally I'm going to talk about ways in which we have helped clients actually achieve this goal of getting to know your dependencies better.

So first of all, I thought I'd scare everyone with a few cherry-picked statistics. These come from Synopsys, who audit their clients' codebases; the figures are based on the audits they carried out in 2020. The headline for me is that the overwhelming majority of the codebases they audited, over 90%, contain open source components, and a large proportion of those codebases have at least one high-risk vulnerability in them. Also interestingly, I think, about half of the codebases that were surveyed have some kind of open source licence conflict. I think this is a particular area that doesn't get a huge amount of attention, but I'll talk a little bit more about that later on.

Now, I've shown you how I can expertly cherry-pick a few statistics from a report. Here are a couple of personal, anecdotal observations, made because I've spent the last 10 years not just as a consultant but also as a tech team lead, developer team lead, head of platform and so on. I think it's fair to say, and not at all disrespectful, that software engineers really don't know a great deal about the dependencies that they're using within their software applications. And the reason I say that is there are a lot of dependencies: not just the direct dependencies, but also the indirect, transitive dependencies. I looked at Jetstack's cert-manager product a few weeks ago, and it has probably in the region of 1,500 dependencies, of which only about 30 are direct, i.e. the developer deliberately chose to import them into their software. It's just not practical to understand and have much knowledge about such a vast array of dependencies in your typical applications. So if you don't know very much about your dependencies, how are you able to trust them? And if you can't trust them and you don't know much about them, how can you gauge the level of risk that these dependencies are going to introduce to your environments and your companies?
Okay, so hopefully I've persuaded you of the need to understand the dependencies and the Docker images that you use within your organisations significantly better than I think most organisations do currently. In the next part of this talk I'm going to go through the outcomes and the benefits that you should be looking to achieve by getting to know your dependencies better.

First of all, we're looking for an improvement in understanding of the security posture of these dependencies. An obvious example of that is what sort of vulnerabilities are present in them, and no doubt all of you are running vulnerability scanners in your CI pipelines and in your runtime environments. On top of that, I think it's also important to get a good sense of the security hygiene, a health-check score if you like, of all of these dependencies. If you were in one of the sessions this morning, someone was speaking about the OpenSSF Scorecard project. If you haven't seen it, I would encourage you to have a look: it's a great tool that gives you a series of scores based on a set of metrics around the hygiene of a project's GitHub source repository. I'll show a quick example of running it shortly.

Secondly, and to be honest, achieving this one in its entirety is a big ask, a non-trivial activity: what we're trying to do here is put in some automation via admission controllers like Kyverno or Gatekeeper, such that only approved artifacts, Docker images, can actually be deployed into your Kubernetes clusters. I'll talk a little bit more about this later in my talk. But as a good first step, look at creating a trusted registry from which it is mandatory to pull your images, rather than going direct to the public internet, Docker Hub, quay.io and so on.

As I've already said, I think open source licence compliance is kind of a thing, in that there are a lot of codebases that don't fulfil all of their obligations. So having some assistance to help you understand what all of those open source licences are in your images and your codebases is helpful.

Now this fourth one: what we've noticed over the last 12 months especially is that companies that consume software from external software suppliers are starting to get quite keen on understanding the list of dependencies in the software that they're taking. A good way of providing this inventory is to use some kind of SBOM. At Jetstack, we've had a lot of success using the CycloneDX SBOM format; another popular choice is SPDX.

Fifthly, once you've created all of this metadata and collected it together, it would be really useful to make it very visible via dashboards, so that it's easy to consume for interested stakeholders such as security teams and dev teams. Later on in this talk I'll offer up some suggestions about how that can be achieved.

And then finally, this isn't particularly a benefit, but I think it's important to emphasise that with all of this extra due diligence activity and the workflows to make it happen, you really want to avoid having much impact on developer velocity.
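Coming back to the OpenSSF Scorecard mentioned above, here is a minimal sketch of running its CLI against a repository and keeping the result for later use. The repository name and token are placeholders, and the exact flags may vary between Scorecard releases.

```bash
# Install the Scorecard CLI (pre-built releases are also available on GitHub).
go install github.com/ossf/scorecard/v4@latest

# Scorecard talks to the GitHub API, so an auth token avoids rate limiting.
export GITHUB_AUTH_TOKEN=<personal-access-token>

# Run the checks against a (placeholder) repository and keep the JSON output,
# which can then be fed into dashboards or stored alongside other metadata.
scorecard --repo=github.com/example-org/example-app --format=json > scorecard.json
```

Each check, Branch-Protection, Vulnerabilities, Pinned-Dependencies and so on, comes back with a score out of 10, which maps naturally onto the per-metric bars on the dashboard shown later in the talk.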
Okay, so I've gone through why I think it's important that organisations and teams get to better understand and know their dependencies, and also what kind of benefits and outcomes you can expect to achieve. Now I'm going to go through ways in which we've helped some of our clients actually achieve those outcomes.

First of all, and I think this is a fundamental cornerstone: have a trusted registry. Rather than allowing developers, CI pipelines, or even the runtime environments to pull images directly off the internet from Docker Hub or quay.io, you only allow them to pull from your trusted registry. We've had a lot of success with Artifactory and Harbor, but obviously there are other alternatives out there. Once you've got this situation where images are being pushed into your trusted registry before being pulled by the various consumers, developers, CI pipelines and so on, you can start to kick off security-focused workflow pipelines that evaluate those new images as they appear in the trusted registry. For one particular client we used Argo Workflows to execute this pipeline, and I'll go through the steps the workflow performs.

The very first thing it does when a new image appears in your trusted registry is generate an SBOM for that image. We use a tool from Anchore called Syft to do this, and the SBOM gives you a list of all of the components within that Docker image.

The second step really applies only to images that were built internally within the client organisation. Here, within the software application CI pipelines, we've injected an additional step that generates an SBOM from the application source code, from its go.mod file or its requirements.txt file, and we include that SBOM within the image that gets built containing the compiled binary. The reason we create this separate SBOM is that it gives a far greater level of detail about the dependencies used in the application than an SBOM based only on the Docker image. So at this point, after step two, you've potentially got two SBOMs, but that's okay, because there's a tool from CycloneDX which will nicely merge the two together to give you a single master SBOM, as it were.

We then use a tool from Sigstore called Cosign. We use it to sign the SBOM and push both the signature and the SBOM as OCI artifacts back into the trusted registry, and thanks to the way the tagging works, these new artifacts can be associated back to the original Docker image that triggered the workflow in the first place. The reason this is incredibly useful is that it makes the SBOM available to consumers of that Docker image. Those consumers might be internal teams, but they could equally be an external third-party client of yours that's consuming your Docker images.

Finally, we also push the SBOM into an OWASP tool called Dependency-Track. The reason we use Dependency-Track is that it has some nice features that let it automatically run a vulnerability analysis against the inventory that has just come in from the SBOM, and also a licence evaluation. Once Dependency-Track has done that, it has a nice web UI that lets you view all the vulnerabilities in the image you've just processed and what kind of licensing is in there. So this is starting to make the information visible to the interested stakeholders I was talking about: the security teams and the development teams.
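As a rough sketch of what those workflow steps look like on the command line, here are the individual pieces, assuming a placeholder image name, signing key and Dependency-Track instance. Exact flags can differ between tool versions, and Cosign can also attach the SBOM as a separate OCI artifact rather than as an attestation; this is just one way of wiring it up.

```bash
IMAGE=registry.example.com/team/app:1.2.3   # placeholder image in the trusted registry

# Step 1: generate an SBOM for the image itself with Syft (CycloneDX JSON output).
syft "$IMAGE" -o cyclonedx-json > image-sbom.json

# Step 2 (internally built images only): the application CI pipeline also produces a
# source-level SBOM, e.g. from go.mod, and the two SBOMs are merged with the CycloneDX CLI.
cyclonedx-gomod mod -json -output source-sbom.json
cyclonedx merge --input-files image-sbom.json source-sbom.json --output-file sbom.json

# Step 3: sign the image with Cosign and attach the SBOM, so both live in the trusted
# registry alongside the image and can be discovered by its consumers.
cosign sign --key cosign.key "$IMAGE"
cosign attest --key cosign.key --type cyclonedx --predicate sbom.json "$IMAGE"

# Step 4: push the SBOM into Dependency-Track for vulnerability and licence analysis.
curl -X POST "https://dependency-track.example.com/api/v1/bom" \
  -H "X-Api-Key: $DTRACK_API_KEY" \
  -F "projectName=team-app" \
  -F "projectVersion=1.2.3" \
  -F "autoCreate=true" \
  -F "bom=@sbom.json"
```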
One of our clients is also using another feature of Dependency-Track: its policy engine. You can define security policies, things like which types of open source licence are permitted or forbidden in your organisation, or a rule that any vulnerability with a score higher than seven is not allowed. Dependency-Track will run its policy engine against those and give you a feel for whether the image actually complies with your policies or not. This particular client is also trialling another OWASP tool called DefectDojo; the intention is that this will help them manage and mitigate the vulnerabilities that have been discovered in the images that have been processed.

So at this point I've described a whole bunch of workflows that we've built for a number of clients, and the next couple of slides cover what we're going to do next to help them better understand their dependencies and improve their due diligence.

The first one, and I don't really expect you to be able to read everything on that screen, is a Grafana dashboard I created. The idea is to provide a single pane of glass that gives you insights and useful information about the software applications in your organisation that have been processed using the workflows I've been talking about. If any of you are familiar with a website called deps.dev, this dashboard is heavily inspired by that website, and I would recommend you go and have a look at it in any case. On the left-hand side is a list of all the dependencies in this particular software application. In the top right is the Scorecard score, where each of the bars represents a different metric that Scorecard uses. Beneath that is a table listing the vulnerabilities in this software application, and beneath that is a count and a list of all of the open source licences that were discovered in it. The bottom half of the screen is, if you like, a dynamic and interactive dependency graph: I imported the dependency graph into Neo4j and then used a tool called Neovis.js to let developers, or whoever is viewing the dashboard, explore the relationships between the different dependencies, both the direct ones and the transitive ones, in this software application.

The other thing we're going to look at, and there have been quite a lot of demos here about admission controllers like Kyverno and Gatekeeper, is using one of those admission controllers to write policies that say: if this image hasn't been signed, or if this image doesn't possess a particular attestation, then don't let it be deployed into this particular runtime environment.
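To give a flavour of that admission-control step, here is a minimal sketch of a Kyverno ClusterPolicy that restricts Pods to images from a trusted registry and verifies a Cosign signature on them. The registry name and public key are placeholders, the exact schema can differ between Kyverno versions, and Gatekeeper would achieve a similar effect with Rego policies instead.

```bash
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-trusted-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    # Rule 1: only allow images that come from the trusted registry.
    - name: restrict-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must be pulled from the trusted registry."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"
    # Rule 2: require a valid Cosign signature on those images.
    - name: verify-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <cosign public key goes here>
                      -----END PUBLIC KEY-----
EOF
```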
Okay, so inevitably, during this little journey of creating these pipelines and the improved security workflows, we learnt a few lessons along the way. The first is that there are a number of SBOM-generating tools out there and they're not all created equal; by that I mean some are better than others at providing an accurate reflection of the components in the Docker images and so on. The tools we've had success with are Syft from Anchore, which I've already mentioned, and also the CycloneDX SBOM generation tools.

The second is something that Adrian brought up in his talk this morning: scanners of Docker images are not magic. If you don't use a standard package tool for installing software into a Docker image, there is a reasonably good chance that it won't be picked up by one of the scanning tools, and therefore it won't appear in the SBOM, and the vulnerabilities that might be associated with that particular binary won't be picked up either.

The third, again topical: yesterday I sat through an interesting talk about Conan, the C/C++ package manager. Sadly, our client that develops C and C++ doesn't use Conan, so it was tricky to figure out exactly what the dependencies and their versions were for the C and C++ applications.

And then finally, scale and volume. As we discovered, processing these SBOMs takes up a lot of CPU and resource. Some of these SBOMs get really, really huge, thousands and thousands of dependencies, and that takes a great deal of memory and CPU; when we suddenly got a surge of images, it broke our Dependency-Track implementation. The other thing to consider is that over time you're going to accumulate more and more images, and of course you're going to need to vulnerability-scan them regularly, probably on a weekly basis, so over time that chunk of resource is going to grow larger and larger to keep scanning more and more images.

So, just before I wrap up, here are the key takeaways that I hope you'll get from my talk. Open source, as you perfectly well know, is fundamental to the large majority of modern software applications, and therefore performing this kind of additional, in-depth due diligence is absolutely crucial in order to understand and mitigate the security risks. Clearly automation is essential; it is just not practical to do this stuff with a spreadsheet and manual investigation. I've put up there a few tools that we've had success with in our implementations. And the final point, as I keep mentioning: make sure the developers, the CI pipelines and the runtime environments pull from this trusted registry that's in your control.

Okay, well, thank you for listening. If you've got any questions I'm happy to answer them, and I've put up a few links here to worthwhile websites to go and have a look at.

Thank you very much, Steve. We do have time for questions; who's first?

Can you expand on the limitations of the container scanning?

Okay, so an example might be: you've got a Dockerfile and you have a custom way of installing Node.js. You might have in your Dockerfile a command that downloads the Node.js archive and then another command that unpacks it into a particular custom folder in your Docker image. By doing that, there's very little chance that your scanning tool is going to notice that Node.js runtime, and if it doesn't notice it, it's not going to report it in the SBOM, it's not going to identify any associated vulnerabilities, and it won't even know the version.
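As a tiny illustration of that answer, the fragment below contrasts the two install styles inside a Dockerfile; the Node.js version, URL and paths are invented for the example.

```dockerfile
# Hand-rolled install: nothing is recorded in the package database, so SBOM and
# vulnerability scanners will most likely miss this Node.js runtime entirely.
RUN curl -fsSLo /tmp/node.tar.gz https://nodejs.org/dist/v18.17.0/node-v18.17.0-linux-x64.tar.gz \
 && mkdir -p /opt/custom-node \
 && tar -xzf /tmp/node.tar.gz -C /opt/custom-node --strip-components=1

# Package-manager install: apt records the package name and version, so tools like
# Syft can report it in the SBOM and match it against known vulnerabilities.
RUN apt-get update && apt-get install -y --no-install-recommends nodejs \
 && rm -rf /var/lib/apt/lists/*
```

The same reasoning applies to any binary copied or downloaded into an image outside the package manager.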
So what I'm really trying to say is: don't let whoever is responsible for creating Dockerfiles do this; make sure that they use standard package managers like yum or apt.

No, you're absolutely correct, I see what you mean. I think you have to take a view on how you're going to tackle images that come in from the outside. Sorry, I didn't quite follow that. No, that's exactly it: this isn't a silver bullet, by the way, and I completely agree with you that if you take random images and you depend on those random images, then you're still leaving yourself open to abuse, as it were. And that's exactly the point I'm trying to make: scanners are not silver bullets either; they will not protect you from that kind of activity.

Yeah, and that's a great point that I'm glad was said. Also with this, did you look at using in-toto-style layouts, or things like that, where you actually generate verifiable metadata for checking all of this ahead of time, rather than looking at something post hoc and hoping it's right?

So, where we're going with this, let's just say I don't want perfect to be the enemy of good. Like I said, I completely accept that this is only a step in the direction of improving security. I think things like in-toto and the stuff they're doing are great, but it still requires a level of wrapping your head around things which is quite hard to achieve, and I think there are some miles in that journey yet before it's, how can I say it, easy to understand and apply to existing organisations' workflows.

Any other questions? No? Well then, thank you very much. Cool, thank you very much.