Hello everyone. Thank you very much for coming to our talk. This is a look under the hood of CNCF security audits. My name is Adam Korczynski and I'm giving this talk together with David Korczynski. This is the agenda for our talk: we will go through what the CNCF security audits are, the internal mechanics of the audits, some insights, results and outcomes from six security audits that we have carried out, and how the community can get involved in the ongoing security work. You may have seen announcements like this on Twitter, on the CNCF's own blog, or on a CNCF project's blog. In this talk we go through the work that goes into getting to the point where we announce the findings and wrap up a security audit. So the projects that we have audited over the last year, or a little more than a year, are Argo, Cilium, CRI-O, Istio, KubeEdge and Flux. We will mainly be speaking about these six audits, though sometimes we will generalize a bit more, because there are more CNCF security audits going on and there will be some insights about those as well. The CNCF security audits we talk about here are, first and foremost, made possible by the CNCF itself and by OSTIF, the Open Source Technology Improvement Fund. Both of these organizations help commission the audits and facilitate them, handling some of the communication with the maintainers and all sorts of other things that are necessary to carry out these audits. So thanks a lot to both of these organizations. And because these are the organizations that get this work started, the goal in a sense becomes: we want to make open source security audits. That means a lot of what happens in an audit is in fact open for everyone to see. And it's an ongoing effort.
There are links to some blog posts that summarize audits of a lot more projects than just these six. And the security audits we present here are also just a small part of the CNCF's whole engagement in securing the CNCF landscape. Other things include security automation by way of fuzzing, and we included a small blog post here which summarizes a lot of the results from that. So what is a CNCF security audit? It's a time-boxed engagement, although it's a bit flexible. The actors involved are usually the project maintainers, the auditors, Adam and I, and the facilitators, OSTIF, the Open Source Technology Improvement Fund. On average, it takes around six weeks per project. Approximately four of those weeks are full-time work looking at the project, and then there's a follow-up period where we share a report, go over a lot of the findings and the fixes, almost like a post-processing of all the core work. In general, we call it a holistic approach to security, because we don't just look at the code, report a bunch of issues and leave it at that. We build a threat model of the project, we look at the documentation, we do a lot of manual code auditing, and we integrate security automation tools as well. In essence, we look at the security of a project from a first-principles perspective: we want to understand what its needs are. Each project has a different set of needs and also a different set of preferences with respect to security, so that also involves a lot of discussion with the maintainers. Put a bit more pragmatically, the output of a CNCF security audit is a set of upstream code changes, either fixes or something of that nature, and upstream documentation changes.
For example, if a project has insecure settings when deployed in its default configuration, or if certain settings have security implications, this needs to be documented. The output of a CNCF security audit is also usually a list of security advisories detailing some of the findings, an audit report, and an audit announcement, which were the links Adam showed. So what are the mechanics of a CNCF security audit? Usually we at Ada Logics and the facilitators have an introductory meeting with the maintainers, where we set up the management of the project, discuss communication channels, outline expectations, and have the maintainers tell us a little about what they think is important for their security posture. They will often guide us in a certain direction, for example towards a piece of code that is recent or has some complexity in it, which helps us pick the focus areas of the audit. Then we have what we call the audit process, which is basically the four weeks of full-time work. We hold regular meetings during those four weeks, either weekly or bi-weekly, where we discuss the status, and we also have a lot of ongoing communication based on the findings. The output, as I already went through, is upstream code and documentation changes. All of the work we do is essentially aligned with what the maintainers want, so if you are a maintainer of a CNCF project, we will engage with you on your terms. I've included a link here: the CNCF maintains a list of all the security audit reports of the CNCF projects out there, at least the ones it commissioned, and I think the link I show here has around 50 audit reports, maybe a little fewer.
And this has been going on since, I think, 2016 or so. To give you an example of how a report looks and what an audit specifically contains, here's the table of contents of the Istio service mesh report. The first few bullet points are the table of contents itself, an executive summary, notable findings, and a project summary, so fairly small paragraphs. Then we come to, for example, a section on fuzzing, which describes how we set up continuous fuzzing for Istio. We come up with a threat model, and we then list the issues found, which is basically the largest part of the report, from page 17 to 50. In this case we also have a SLSA compliance review, which is a software supply chain assessment. What's perhaps to take away from this is that the audit really is holistic: there's a lot of different content in it. Not all projects will have the exact same table of contents, but usually they will have quite a selection of different activities, not just the issues found or just a supply chain review. The results summary usually looks something like this, again taken from the Istio example, where we try to highlight the most important bits relative to the audience of the report. And the audience of the report is not always clear. Sometimes we write the report with the intention of communicating it only to the maintainers; sometimes it's also for the users of the given open source project. There are a lot of different people looking at the reports, so we have to include a bit of everything. But usually it lists, among other things, the issues found, and for each issue we also have a highly technical description.
So this slide is just the introduction to a given issue found, but in the report we usually have a highly technical description with the relevant code and so on. So if you are a developer sitting in a company that uses Istio, you should be able to read the report, quickly see what the code issues are, and quickly assess whether, for example, your organization is affected or not. And here, if you're interested, is more of an artifact slide where you can see the announcements of all six audits that we're talking about in this presentation. So let's go through some of the goals and focus areas we have when we audit CNCF projects. First of all, we build the threat model. This usually starts, or so far has always started, at the beginning of the audit, and it guides all efforts in the audit. One of the things we look at in the threat model is the attack surface: the area where a malicious actor could seek to penetrate the system, so to say, to launch an attack. Then we look at who the threat actors are. These are personas that could be malicious, but are not necessarily so, and we typically seek to map a given vulnerability to a threat actor when we do the code auditing later in the audit. Then we map out the threat actors' goals, what they seek to achieve, and we try to prioritize what an attacker may seek to achieve when launching an attack. Then we put ourselves in the position of an attacker, thinking: what would we do if we were malicious? One of the things we would then look at is particularly critical code parts: places where particularly interesting things happen in the code, which could be dangerous if an attacker could achieve control over that part of the code.
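To give a feel for what the threat modeling step produces, here's a hedged sketch in Go. The actors, surfaces, and goals are made-up examples for an imaginary project, not findings from any audit; the point is just that the model is a structured mapping we can refer back to during code auditing.

```go
package main

import "fmt"

// ThreatActor is one persona from a threat model. The actors, surfaces
// and goals below are hypothetical examples, not from any audited project.
type ThreatActor struct {
	Name    string   // who the actor is
	Surface string   // the attack surface they can reach
	Goals   []string // what they try to achieve, highest priority first
}

// BuildModel returns a toy threat model for an imaginary project.
func BuildModel() []ThreatActor {
	return []ThreatActor{
		{
			Name:    "unauthenticated network attacker",
			Surface: "publicly exposed API endpoints",
			Goals:   []string{"remote code execution", "denial of service"},
		},
		{
			Name:    "malicious low-privileged tenant",
			Surface: "multi-tenant control plane",
			Goals:   []string{"privilege escalation", "reading other tenants' data"},
		},
	}
}

func main() {
	// Later in the audit, each vulnerability found gets mapped back to one
	// of these actors, which helps judge its real-world severity.
	for _, a := range BuildModel() {
		fmt.Printf("%s via %s: %v\n", a.Name, a.Surface, a.Goals)
	}
}
```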
And then we also consider whether an attacker can do fairly low-effort attacks to achieve malicious control over the project. Then we go through the code base of the project under audit, and when we do that, we keep the threat model in mind all throughout. One of the things we look at is whether the project defends against the threats and threat actors identified during threat modeling. We then look for vulnerabilities: code issues, design issues, et cetera. During that process, we also look at weak code parts, where we consider whether a certain component is hardened enough against potential attacks. Then we think of the code base from a general perspective: is the overall approach to security mature? An example of a finding from the manual code auditing looks something like this. David mentioned that security advisories are one of the outputs, and this is an example from the audit we did with the CRI-O project, where we found a high-severity CVE in one of the APIs, ExecSync. Next, we look at the security tooling of the project: how much static analysis do they do, how much dynamic analysis, and whether this is set up to run continuously in the CI pipeline. For example, we look at the fuzzing efforts of a given project. Here we highlight why fuzzing is very important, with reference to the OSS-Fuzz project, which has found almost 9,000 vulnerabilities by way of fuzzing open source projects, and that's only in open source projects. It's worth mentioning here that approximately 30 CNCF projects are integrated into OSS-Fuzz. If we exclude two or three projects which are big players, the accumulated set of bugs found by these integrations is around 400 or so issues, I think. There will be some false positives among them, but it's a lot.
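To illustrate what fuzzing does, here's a hedged, self-contained sketch: a toy fuzz loop that throws random inputs at a deliberately buggy parser until an invariant ("the parser never fails catastrophically") breaks. Real integrations use Go's native fuzzing (`go test -fuzz`) or OSS-Fuzz harnesses rather than a hand-rolled loop; `parseLength` is a made-up function, not code from any audited project.

```go
package main

import (
	"fmt"
	"math/rand"
)

// parseLength is a deliberately buggy example parser: it trusts a
// length prefix in the input without checking it against the actual
// payload size. Hypothetical code, not from any audited project.
func parseLength(data []byte) (payload []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic: %v", r)
		}
	}()
	if len(data) < 1 {
		return nil, fmt.Errorf("input too short")
	}
	n := int(data[0])
	return data[1 : 1+n], nil // panics when n > len(data)-1
}

// fuzz feeds random byte strings to the parser and returns the first
// non-empty input that makes it fail, mimicking what a fuzzer reports.
func fuzz(iterations int) []byte {
	rng := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	for i := 0; i < iterations; i++ {
		data := make([]byte, rng.Intn(8))
		rng.Read(data)
		if _, err := parseLength(data); err != nil && len(data) > 0 {
			return data // crashing input found
		}
	}
	return nil
}

func main() {
	if crasher := fuzz(10000); crasher != nil {
		fmt.Printf("crashing input found: %v\n", crasher)
	}
}
```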
And if we then include the big players, which are memory-unsafe programs that usually have a lot more found by fuzzing, we are easily in the thousands. So the CNCF gets a lot of impact from integrating fuzzing, in particular, into its projects. David mentioned that the CNCF does this in another, more focused engagement, and they regularly publish blog posts about it. Next, we look at the security disclosure process of the given project, and here we primarily consider maturity and industry standards. First of all, we consider whether an open source contributor can even file an advisory or disclosure. This involves having a proper security policy in place, one that both exists and outlines what a disclosure should include. Other things we think of when we consider the security disclosure policy are: what are the promises, and do they follow industry standards? For example, when does the project aim to follow up, and what are the follow-up steps for the contributor? Our goal here is to make it easy for contributors with good intentions to contribute security work to the CNCF projects; we want to make sure the community can be engaged in a positive way. Next, we look at the general source code maintenance: the review process, the CI pipeline, and the inline documentation, for example whether there are TODO comments for things that have already been resolved, which can point to deeper underlying issues at times. Then we look at dead code, which, interestingly, often has some kind of security issue in it. In those cases, the maintainers usually just remove the dead code. But it is an interesting area, because the code may be dead now, when we perform the audit, but a contributor may find it and seek to use it later, and may then use vulnerable code by accident.
Then we look at the exported, public and internal APIs, and whether the assumptions made there are correct. Then we look at the release process for the artifacts. Some things we look at are who can build the release artifacts, the environment they were built in, how and where, and how secrets are managed during the build process. We also look at whether the project issues a provenance statement. With regard to the environment, we look at things like isolation: whether it runs with or without network connectivity, how it is connected to the outside world while the artifact is being built, and whether the environment is provisioned solely for building the artifact or is reused for other purposes. A pitfall here is a developer building the release on their local machine, where the environment may be exposed to a bunch of security issues. It's worth mentioning that the bullet points on the right essentially ask how compliant a given open source project is with SLSA, which is an OpenSSF framework now. Yeah, exactly. So, like David mentioned, SLSA: we can start with the problem here, which was the SolarWinds incident in 2020. In response to that, open source efforts were put in place to create the SLSA framework, which aims to counter the kinds of threats that were visible in the SolarWinds attack. Some actors claimed that SLSA would have countered the SolarWinds attack; it was certainly a goal of SLSA, and SolarWinds also adopted the SLSA framework in their own release pipeline. We want to ensure the same for the CNCF projects: that they have standards like this in their release processes. Then we look at deployment and usage from a user perspective.
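To make the provenance point from a moment ago a bit more concrete, here's a hedged sketch of a minimal provenance statement, modeled loosely on the in-toto statement shape and the SLSA v0.2 provenance predicate. The struct is deliberately simplified and the artifact name, digest, and builder URL are made-up placeholders; treat this as illustrative, not as a complete implementation of the specification.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Statement loosely mirrors an in-toto statement carrying a SLSA
// provenance predicate. Simplified for illustration only.
type Statement struct {
	Type          string    `json:"_type"`
	Subject       []Subject `json:"subject"`
	PredicateType string    `json:"predicateType"`
	Predicate     Predicate `json:"predicate"`
}

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"`
}

type Predicate struct {
	Builder   Builder `json:"builder"`
	BuildType string  `json:"buildType"`
}

type Builder struct {
	ID string `json:"id"`
}

// ExampleStatement builds a toy provenance statement. A consumer would
// check that the digest matches the artifact they downloaded and that
// the builder is one they trust (a CI system, not a developer laptop).
func ExampleStatement() Statement {
	return Statement{
		Type:          "https://in-toto.io/Statement/v0.1",
		PredicateType: "https://slsa.dev/provenance/v0.2",
		Subject: []Subject{{
			Name:   "example-project-v1.2.3.tar.gz",            // hypothetical artifact
			Digest: map[string]string{"sha256": "e3b0c442..."}, // placeholder digest
		}},
		Predicate: Predicate{
			Builder:   Builder{ID: "https://ci.example.com/builder"}, // hypothetical builder
			BuildType: "https://example.com/build/v1",
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(ExampleStatement(), "", "  ")
	fmt.Println(string(out))
}
```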
And the overall thought here is whether a user can use the product securely, or whether, which should not be the case, they will deploy it and make a bunch of security mistakes along the way. Some things we look at are whether the product is secure by default, whether documentation is in place to make sure that adopters can deploy the product securely, and whether all trade-offs are documented properly. A trade-off is, for example, that some users may prefer certain insecure settings, which is fine in their own deployment but not fine for other users that deploy the product in another manner. A part of that is the environment in which you deploy the product. If you only use it internally and have to go through a bunch of security measures to get to the product, for example if you deploy it behind authentication, then it's fine to assume a slightly lower security posture, whereas users that expose the product to the internet without authentication should be more careful. So let's talk about some of the security findings from the six audits that we have carried out. Like David mentioned, we are behind the technical part of the security audits. So we go through all these steps, and in total we have identified 89 security issues over the six audits. These are not all code vulnerabilities; they are security issues across all the different areas that we went through in the previous five or so slides. I think most of the moderate, high and critical ones are code issues. Usually those go a little more hand in hand. Yeah, completely. So the documentation issues may spill over into the moderate ones, but are typically in the lower part of the severity range. To go through them: 20 informational, 32 low, 28 moderate, one high and one critical, and the critical one was a code vulnerability.
And of the 89 security issues, 21 were assigned CVEs. To go through those: one was not scored, and that was actually a security vulnerability in Go that was found during the Istio audit. These are CVSS scores, right? Yeah, exactly, this is CVSS scoring. One was not scored, since Go, I believe, doesn't score its vulnerabilities, and this one wasn't. Then we found one low CVE, 13 moderate, six high, and one critical. Of the classes of vulnerabilities that we found, we can list these seven: one command injection, two arbitrary file reads, and one case of sending sensitive data over an unencrypted connection. I should mention that command injection is when the program executes some kind of command and you are able to manipulate some parameters in that process. Arbitrary file read is when a threat actor is able to read files they should not have access to, and sending sensitive data over an unencrypted connection could allow sniffing the traffic and capturing the sensitive data. Then we found 13 cases of denial of service, which is where a threat actor can disrupt the availability of the given software product, two cases of cross-site scripting, which is the ability to execute code in the browser, one use of an insecure cryptographic primitive for sensitive data, and one case of HTTP request smuggling. Now, when we carried out these audits, most of the projects already had ongoing or previous security work, and one thing we find is that repeated security efforts do pay off; we have seen that in multiple cases. One case of this was our Argo audit, where the project had a security audit conducted in 2021 in which the researchers found 57 security issues. One year later, they had another security audit, ours, in which we found 26 security issues and nine CVEs. And this is not an isolated example of returns from ongoing security work. During our Istio audit, which we conducted in 2022, we found 11 security issues.
And during that process, the Istio maintainers, while triaging one of our reports, found a CVE in Go itself. In the audit one year before, the auditors had found 18 issues. The same holds for our CRI-O audit; the example here is that more effort gives more results. When we started on CRI-O, it was just a few months after a critical security vulnerability had been reported in it. So even though there had been some private auditing going on, in this case by CrowdStrike researchers, there were still a few issues to be found in CRI-O. So I guess this is also, in a sense, a call for more resources: more resources help find more issues. And interestingly, when we reported an issue to CRI-O, the CRI-O maintainers were excellent at handling it and also at identifying that the issue actually affected some other pieces of software, so they should also get some credit for propagating the issues found, which was great. So, getting involved. All of the work gets shared publicly, and a goal of the auditors is essentially to get more people to come and help improve the security of these projects. This is also why some of the reports are written in a way that, if you are a security researcher reading them, it should be easy to identify what the next steps are in approaching and analyzing the security of a given piece of software. When we have finished an audit, that is by no means finished work; there's usually still a lot more to do. The way to get involved: read the reports, and understand the threat model in particular. Some projects are very focused on that, and the way they consider vulnerabilities might be different from what, for example, a security researcher considers a vulnerability. Carry out your own auditing and report the vulnerabilities to the projects. Right.
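If you do carry out your own auditing, command injection, one of the classes we listed earlier, is a classic pattern to look for. Here's a hedged sketch in Go; both functions are made up for illustration and are not taken from any audited project.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runGrepUnsafe builds a shell command string from user input. A
// pattern such as "foo; rm -rf /" would be interpreted by the shell,
// which is the classic command injection pattern.
func runGrepUnsafe(pattern, file string) *exec.Cmd {
	return exec.Command("sh", "-c", "grep "+pattern+" "+file)
}

// runGrepSafe passes the user input as a discrete argv element, so no
// shell ever parses it and metacharacters lose their special meaning.
func runGrepSafe(pattern, file string) *exec.Cmd {
	return exec.Command("grep", "--", pattern, file)
}

func main() {
	input := "foo; echo injected"
	fmt.Println(runGrepUnsafe(input, "notes.txt").Args) // whole string reaches "sh -c"
	fmt.Println(runGrepSafe(input, "notes.txt").Args)   // input stays one argument
}
```

When reading audit targets, the thing to grep for is exactly the unsafe shape: user-controlled data concatenated into a string that is later handed to a shell.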
And in some cases, for example, shout-out to the Argo project, there is a bug bounty in place, and you may be able to collect a reward if you submit a successful disclosure, which they put in place after the audit, right? Yeah, that's another good example from Argo: after the audit, they saw the benefit of having third parties come and address security issues in the project, and that made them set up a bug bounty, from which they have reportedly had good results. And when you, or anyone in the community, submit disclosures to these projects, try to be helpful and include as much of your work as you can, even a fix if you can provide one, or your own root cause analysis, et cetera. More information in your disclosure typically means a quicker turnaround time. Conclusions. The CNCF carries out security audits continuously; they sponsor organizations like us to come and assess these projects. We work closely with the maintainers. The goal is really to make it a community effort as much as possible, and we try to let the maintainers guide us as much as they want, which so far has been pretty successful. The goal of the audits is to look both at the source code specifically and at the whole ecosystem around the software. The six audits we carried out found just about 90 issues and 21 CVEs. We also did a lot of improvements in the context of security automation, so it's not just the issues that matter; in general, that's probably half of what we consider important. A lot of automated security tooling stays integrated after a given audit, as we consider the continuity of the security work really important. And the CNCF maintains a list of all the security audits, so please do check out this list.
If you are dependent on a project, or if you're interested in engaging with one on security, I'm sure they'd be happy to have you. That's it from our side. Thank you very much.