Okay, very good. Well, we're just a couple of minutes after the hour, so let's go ahead and get started here. I'm Lee Calcote, founder of Layer5 and a CNCF Cloud Native Ambassador. I'll be moderating today's webinar, and we'd like to welcome our presenter today, Julien Sobrier, head of product at Octarine. Just before we hand things over to Julien, we've got a couple of housekeeping items. Yes, the slides will be posted. Yes, we are recording today, and the recording of today's webinar should be up on the CNCF's webinar site a little later today. We should note that during today's webinar, while you won't be able to speak, your questions are highly encouraged. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many of those as we can. As a reminder, this is an official CNCF webinar, and as such it's subject to the CNCF Code of Conduct. So please don't add anything to the chat or ask questions that would be in violation of that code of conduct. Basically, be respectful of Julien, our presenter, and your fellow webinar attendees. And with that, it's time to hand things over to Julien. Julien, over to you.

Thank you, Lee. Thank you for the introduction. So, yes, I'm head of product at Octarine, and I'm going to give you a few words about Octarine that explain why we created KCCSS. At Octarine, we provide a security solution for Kubernetes. One part covers the runtime, with network IDS, data analysis, et cetera. The second part looks at and enforces the configuration of pods. One thing we used to show our customers is: look how many of your containers are running as root, how many are running as privileged, how many have the NET_RAW capability, et cetera.
And the question we got from customers is: okay, but out of all of these containers with risky configurations, which one should I look at first? Which one is most likely to put my whole cluster at risk? So we realized that we needed a way to have a holistic security score for containers: look at all the security settings of a container and give it a score that users can use to figure out which one is the most dangerous, which one is the highest-risk workload that they should address. The other thing that we understood is that there are about 30 different Kubernetes settings that directly affect the security of workloads, and how these security settings combine to make the security better or worse is hard to understand. In the end, it's hard to understand what actual risk you are potentially facing. It's not just about best practices, like minimizing the number of root containers; it's really about a specific risk that you want to avoid or remediate. And finally, the third thing we wanted to achieve is to give users a way to remediate high risks, because there might be a good reason why they need to run a container as root or as a privileged container, but there are a number of other changes they might be able to make to lower the risk, short of just turning off some of these settings. So the first thing we did was look around at the existing security frameworks that maybe we could use for Kubernetes. There are a number of them, and probably the most famous, or the most used today, is CVSS, the Common Vulnerability Scoring System. You can already see the similarity in names between KCCSS and CVSS, and that's because we took a lot of inspiration from CVSS. You're probably familiar with it if you are scanning your Docker images.
The scanner will give you a list of vulnerabilities with their CVSS ratings and explanations, and CVSS is very good at describing the risk of these vulnerabilities. It shows the impact to the confidentiality, integrity, and availability of your application or server. It shows the potential scope of the vulnerability: can it be used to compromise just the application, or the entire server, or to get access to your entire data center? It also explains how easy it is to exploit the vulnerability: is it a remote vulnerability or does it require local access, et cetera. So CVSS is very good at describing and measuring the risk associated with individual vulnerabilities. Derived from it, there is also CCSS, the Common Configuration Scoring System, which is CVSS applied to configuration. When I first looked at that, I thought, great, that's probably something we can use for Kubernetes configuration. Unfortunately, it's pretty much a dead project. It's based on version 2.0 of CVSS, and CVSS is now at version 3.1, with quite a lot of improvements between 2.0 and 3.0. So that's not something we could use directly, but the idea of applying CVSS to configuration is something that we definitely did. The third project that we looked at is CCE, the Common Configuration Enumeration. Unfortunately, it's also not an active project, but what's interesting here is that it's a checklist of configuration settings. Where CVSS gives you a framework but not really data (the data are provided by other organizations or by vendors), CCE is really a checklist that you can go through one by one to check the security configuration of your application or server. So we decided to take the best of these three frameworks to create KCCSS. It's a framework that also comes with a list of rules, just like CCE. We created rules that describe the risk of the different Kubernetes container settings.
We describe the risk the same way as CVSS does, and you'll see that in an instant: we show what the impact on security is, how likely it is to be exploited, et cetera. And we made it more specific to Kubernetes. For example, there is a scope in CVSS for what you can potentially impact, and we changed the scope to be the container, the node, and the cluster. Probably the most important part, and probably the hardest as well, is that in the end we really wanted to show a risk score for the workload and not for the individual security settings. So we created a new formula that takes all of the risks into consideration and gives a score for the entire workload. We have two types of rules. The first is risk rules, which are very similar to CVSS: we describe what the impact is on the availability of your container or cluster, on confidentiality (is it likely to expose secrets, can an attacker get access to secrets?), and on integrity (can you make changes to your container or cluster, or whatever the scope is?). Each of these is rated from none, low, medium to high, again just like CVSS. We also give a description, and we try to make it very specific. So again, it's not about trying to enforce some security best practices, but really about understanding the potential risk that you're facing. In this example, it's the shared host network that's enabled for a container. We explain that it can potentially expose the container to the internet by binding the container's ports to the host IP, and that opens you up to remote attacks. If you don't have anything in front of it, it can be used to gain access to an application that's maybe not designed to be exposed to the internet. It also allows you to do something quite different, local this time: to sniff the loopback interface on the host, which lets you see the traffic from other containers.
That's why the impact to confidentiality is high and the impact to availability is high, but the impact to integrity is low. Then, just like CVSS, we explain how easy it is to exploit, and whether it's something that's exploitable remotely or requires local access. We do that for all the rules; I think we have about 25 to 30 risk rules today in KCCSS. And I think it's interesting just to read through them; hopefully you'll learn a few things about the actual risk associated with all of these settings. We also learned from a lot of users that the first person who looks at it might be a DevOps person who is more familiar with risk and security, but when they want to share it with a developer, explaining why the way a particular container is configured needs to be addressed, they need to be able to convey what the risk is and why it should be taken care of. The other type of rule, which doesn't exist in CVSS, is remediation rules. These are security settings that make your security better. For example, having a service mesh that encrypts all of the traffic means that in the case from before, where you could sniff traffic from other containers, the traffic is now encrypted, so the risk to confidentiality is remediated. Remediations are described the same way as risks: we show how they remediate integrity, confidentiality, and availability, whether they remediate local attacks or remote attacks, and whether they protect the container, the node, or the entire cluster. For the risk score, we first look at all the individual risks associated with the containers. We map the security settings of the container to the risk rules that we have, and for each of them we compute a score from zero, the lowest risk, to 10, just like CVSS. At a high level, the risk score has two components. One is based on the impact of the risk.
If it has a high impact on availability, a high impact on confidentiality, a high impact on integrity, and can potentially compromise your entire cluster, that part of the score will be larger. The other part, equally important, is based on the exploitability: how easy is it to take advantage of a misconfiguration or a risky configuration? That's based on whether it's local, which is harder, or remotely accessible, and whether it's easy or not to exploit the issue. We just add both of them, and that gives you the score for an individual risk rule. So you might have a risk that has very high impact but very low exploitability, and the overall score might be lower than, or the same as, a risk with medium impact that's very easy to exploit. So the first step is that we compute multiple risk scores, one for each setting of a workload. Then, from these risks, we compute the score for the entire workload. The way we are doing it today (we're working on a new version, but what we have actually works very well) is that we look at all the risks and their attack vector and scope. If two risks share the same attack vector and scope, we take the maximum score. Then we add the scores up and take the square root. That gives us the workload score, again from zero to 10, based on the individual risks that we computed earlier. I've only mentioned risks so far, but I said that there are also remediations. When we look at all the individual risks, we try to find a matching remediation, and what we call matching today is a remediation that has the same attack vector and the same scope as the risk. So if the risk has cluster scope and is remotely exploitable, we look for remediations that also have cluster scope and remediate remote attacks. Once we have the remediation, we basically lower the impact of the risk by the remediation. So if a risk was high for confidentiality and the remediation was low for confidentiality, then we modify the risk to be medium, one notch down.
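The pipeline just described (per-rule scores from impact plus exploitability, remediation matching by attack vector and scope, the notch-down reduction, and the max-then-square-root aggregation) can be sketched roughly in Python. The numeric weights below are illustrative assumptions, not the actual KCCSS constants:

```python
import math

# Illustrative sketch of the KCCSS scoring pipeline. The rule fields come
# from the talk (impact levels, attack vector, scope); the numeric weights
# are assumptions, not the real KCCSS constants.
LEVELS = ["none", "low", "medium", "high"]
LEVEL_SCORE = {"none": 0.0, "low": 0.5, "medium": 1.0, "high": 1.5}
SCOPE_WEIGHT = {"container": 1.0, "node": 1.2, "cluster": 1.5}
VECTOR_WEIGHT = {"local": 1.5, "remote": 3.0}

def rule_score(risk):
    """Score one risk rule from 0 to 10: an impact part plus an
    exploitability part, as described in the talk."""
    impact = sum(LEVEL_SCORE[risk[k]]
                 for k in ("confidentiality", "integrity", "availability"))
    impact *= SCOPE_WEIGHT[risk["scope"]]
    exploitability = VECTOR_WEIGHT[risk["vector"]]
    if risk.get("easy", False):  # easy-to-exploit bonus
        exploitability += 1.0
    return min(10.0, impact + exploitability)

def remediated(risk_level, remediation_level):
    """Lower an impact rating by the remediation's strength, but never all
    the way to 'none': a remediation rarely removes 100% of the risk, so a
    high risk with a high remediation only goes down to 'low'."""
    if risk_level == "none" or remediation_level == "none":
        return risk_level
    return LEVELS[max(LEVELS.index(risk_level) -
                      LEVELS.index(remediation_level), 1)]

def workload_score(risks, remediations):
    """Apply matching remediations (same attack vector and scope), group the
    modified risks by (vector, scope), keep the max score per group, sum the
    maxima, and take the square root, capped at 10."""
    groups = {}
    for r in risks:
        r = dict(r)
        for m in remediations:
            if m["vector"] == r["vector"] and m["scope"] == r["scope"]:
                for k in ("confidentiality", "integrity", "availability"):
                    r[k] = remediated(r[k], m[k])
        key = (r["vector"], r["scope"])
        groups[key] = max(groups.get(key, 0.0), rule_score(r))
    return min(10.0, math.sqrt(sum(groups.values())))
```

Under these assumed weights, a remote, easy-to-exploit risk with high confidentiality and availability impact scores 7.5 on its own, and a matching encryption-style remediation pulls the workload score down by lowering the confidentiality rating before aggregation.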
If we had a high risk and a high remediation, we don't go all the way down to none, because typically remediations don't exclude one hundred percent of the risk; they lower it to a very low level. So we go from high to low. We basically do risk minus remediation, we get this modified risk, and that's the risk we use in the formula from step one to compute the workload score. So KCCSS is a framework that comes with a list of rules, but the idea is that you're not going to run it by yourself, trying to manually map these rules to your clusters and manually run the formula. Instead, there are going to be tools that do the work for you: scanners that will look at your workload configuration, map it to the KCCSS rules, and give you the score. And along with KCCSS, we have open-sourced kube-scan, which is a workload scanner that comes as a container itself that you install in your cluster. It scans the workloads that are currently running in your cluster and shows you the risk score and the risk details in a nice web UI. But the idea is that there could be other tools, created later by us or by other people, that take advantage of KCCSS and show you the same type of results. Okay, so that's the theory. Let me show you what it looks like in practice. I'll do a demo, and at the end of the demo I can answer some questions; hopefully they will clarify anything that's been shown so far, and then I'll conclude with a couple of slides. So before I show you kube-scan itself, let me show you the kube-scan GitHub page. Since it's a public repository, you can Google for kube-scan, or kube-scan Octarine, and you'll probably find this page first. So this here is the description of the project. It has a link to the KCCSS framework; I'll show you that page in a minute. It explains how it works and what you're going to see, with screenshots, but probably most important for you, it explains how to install it.
Obviously you can always compile everything from source and create your own Dockerfile, but we've uploaded the container image to a public repository, so you can take advantage of that and just do a kubectl apply and install the container in one command. We have two ways of installing kube-scan. The more secure way is to install the container and do a kubectl port-forward to access the web UI from your computer. If for some reason you want to expose it to colleagues or to other people, it's also possible to use the other type of installation, which includes a load balancer, so the web UI can be exposed. Just be careful not to expose it to the internet; you don't want to give away too much information about your cluster. And when you're done, again with one kubectl command, you can just delete everything. So I was saying that there is also a link to KCCSS. I just wanted to briefly show you the information we have here. We have information about the project, mostly what I've described in the slides, and how you can create your own rules. We really made KCCSS easy to extend, so you can create both new risk rules and new remediation rules, and we really hope that we are going to have more rules that describe how open-source solutions, or even proprietary solutions, can improve your security posture, or sometimes how some applications that you install have additional risks associated with them. The idea is to capture these cases as KCCSS rules, so you have an exact understanding of your cluster. So I was mentioning the rules. Sorry, getting lost with the shortcuts. Okay, here we are. Under rules, we have the risks and remediations as YAML files, and we have a couple of tools that can validate them. You don't have to fill out too much information, since the ratings, the scores, are computed automatically. You just need to fill out the description and the impact if you want to create new rules.
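As just described, a rule author only fills in the description and the impact fields, and the scores are computed automatically. Here is a minimal sketch of what a rule and a validator might look like; the field names are illustrative guesses rather than the actual KCCSS schema (the real rule files are YAML):

```python
# A hypothetical risk rule, expressed as a Python dict for illustration.
# The real KCCSS rules are YAML files and the exact field names may differ.
host_network_rule = {
    "name": "host-network",
    "description": "The container shares the host's network namespace, "
                   "which can expose it to the internet and allows "
                   "sniffing traffic from other containers.",
    "confidentiality": "high",
    "integrity": "low",
    "availability": "high",
    "scope": "container",
    "attack_vector": "remote",
}

ALLOWED_LEVELS = {"none", "low", "medium", "high"}
ALLOWED_SCOPES = {"container", "node", "cluster"}
ALLOWED_VECTORS = {"local", "remote"}

def validate_rule(rule):
    """Minimal validator in the spirit of the repo's validation tools:
    check required fields and allowed values. Scores are derived later,
    so the author only provides the description and the impact."""
    errors = []
    for field in ("name", "description"):
        if not rule.get(field):
            errors.append("missing " + field)
    for field in ("confidentiality", "integrity", "availability"):
        if rule.get(field) not in ALLOWED_LEVELS:
            errors.append("bad impact level for " + field)
    if rule.get("scope") not in ALLOWED_SCOPES:
        errors.append("bad scope")
    if rule.get("attack_vector") not in ALLOWED_VECTORS:
        errors.append("bad attack vector")
    return errors

print(validate_rule(host_network_rule))  # [] means the rule is valid
```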
We have the same for remediations, and we are looking at adding more rules for specific tools. We also have a wiki, which is the last thing I wanted to show you on GitHub, where we have more information. Specifically, if you're familiar with CVSS, we have a more in-depth comparison of KCCSS and CVSS, more explanation about the different fields in a rule, and some information about how to contribute. Okay, so let me show you kube-scan. We have installed kube-scan in our demo environment, which has about 50 different workloads. When we install kube-scan, the first thing it does is scan the configuration of all the workloads and compute the risk, and we can see that we have risk scores ranging from eight all the way down to five. So we already know that the riskiest one is this echo A deployment. We have a couple of StatefulSets and DaemonSets in there too; we show the type of Kubernetes object that it is and where it's located, in which namespace, and this is for one cluster. We don't show the cluster here; that's chosen when you install kube-scan. You can click on the score here, and it will give you the list of all the risks and remediations, and we can take a look at a couple of them. We see that the highest risk here is that we are mounting some host paths in the container with write permissions, and that they are sensitive host path directories. We can click on "show more" and it explains exactly why it's risky. This is about mounting sensitive host paths like /var/run/docker.sock, one of those host paths that you don't want to be mounting in a container, because it can give the container access to Docker: modify how Docker is running, let you interact with applications through the socket file, read secrets, modify binaries on the host. So all of the different risks in the different categories are explained here. It's very easy to exploit, it's just about reading and writing to files once you get local access, and you can potentially impact the entire node, not just the container.
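The hostPath check from the demo could be sketched like this: given a pod spec in the standard Kubernetes shape, flag hostPath volumes mounted from sensitive locations and note whether the mount is writable. The sensitive-path list here is an illustrative subset, not kube-scan's actual list:

```python
# Sketch of the hostPath risk check from the demo: flag containers that
# mount sensitive host directories, and whether the mount is writable.
# The sensitive-path list is an illustrative subset, not kube-scan's own.
SENSITIVE_HOST_PATHS = ("/", "/etc", "/var/run/docker.sock", "/var/lib/docker")

def is_sensitive(path):
    """True if the host path is one of, or lives under, a sensitive path."""
    for s in SENSITIVE_HOST_PATHS:
        if path == s or (s != "/" and path.startswith(s + "/")):
            return True
    return False

def risky_host_path_mounts(pod_spec):
    """Return (container name, host path, writable) for every hostPath
    volume mounted from a sensitive location."""
    host_paths = {v["name"]: v["hostPath"]["path"]
                  for v in pod_spec.get("volumes", [])
                  if "hostPath" in v}
    findings = []
    for container in pod_spec.get("containers", []):
        for mount in container.get("volumeMounts", []):
            path = host_paths.get(mount["name"])
            if path is not None and is_sensitive(path):
                writable = not mount.get("readOnly", False)
                findings.append((container["name"], path, writable))
    return findings

# The Docker socket example from the demo:
pod = {
    "volumes": [{"name": "docker-sock",
                 "hostPath": {"path": "/var/run/docker.sock"}}],
    "containers": [{"name": "app",
                    "volumeMounts": [{"name": "docker-sock",
                                      "mountPath": "/var/run/docker.sock"}]}],
}
print(risky_host_path_mounts(pod))  # [('app', '/var/run/docker.sock', True)]
```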
So you can go through the list. Again, it's very interesting, I think, for education, if you have a team of developers who are not necessarily aware of the risks associated with the many different container settings that they have to set. NET_RAW is another good example: a capability that lets you craft any kind of packets, which means you can do man-in-the-middle attacks. That's why the impact on confidentiality is high. So you can go through the list, and we can see here that there are no remediations, only risks. If we look at some other workload that has a lower risk, like this one here, we see that there are also some risks, but it has a couple of remediations that bring down the risk quite a lot. One is the fact that there's no listening port. This service is not listening for any incoming traffic, which means it's remediating basically all kinds of remote attacks by not accepting traffic. So there isn't actually any risky configuration here that has to do with remote access, but if we had any, that would be a very good remediation for all of those risks. Same thing, it has a service mesh; in this case it's opt-in, but it could be Istio or anything else with encryption. That means it's now much harder for any workload that can sniff traffic to get any content, so it remediates specifically the confidentiality risks and not so much the other types of risk. Again, that's very interesting, I think, to really understand: if I install a service mesh and I enable encryption, what kind of risk do I take care of, and what kind of risk actually remains? A service mesh is not the security answer for everything; it's a security answer for specific types of risk. I think that's it for the demo. Oh, actually, no, I wanted to show you another thing that we see quite a lot with many users. So this one shows a medium risk. Actually, it should be high; that's a bug that we're going to fix, seven is supposed to be high.
What's interesting here is that it's a workload that's exposed through an external load balancer, so it's potentially accessible from the internet, which by itself, again, is not necessarily a big deal; there are workloads that are supposed to be exposed to the internet. But it does not have any CPU or memory limits. So what happens if you get a DoS on this workload that's accessible from the internet and you don't have any kind of rate limiting in front? You potentially will be using too many resources on the node, and Kubernetes is going to try to reschedule other pods on different nodes, and you may have cascading failures. Also, what's interesting with having something that's exposed to the internet is that you are potentially chaining local risks with remote access through the load balancer. If you have any kind of vulnerability in your own application running in the container, or in the OS, that can be used to chain remote access with a local vulnerability. So that's also something you want to pay attention to when you have a lot of local privileges: making sure that they are not accessible remotely. So I encourage everybody to download kube-scan and try it in their own cluster. You can just remove it when you're done. It's open source, so you can look at the code. You'll see that we don't export any information; it doesn't connect to the internet, so you can even run it in an air-gapped environment. Nothing is being sent out; it's running 100% locally without any internet access. The only thing is incoming traffic, so you can actually access the web UI. So that's it for the demo. I have, I think, two slides to conclude, but I think this is a good time to answer some questions if there are any.

Oh, awesome. Very good. So, Julien, if I heard you correctly there at the end, you're saying that this open source security project is in fact itself very secure, right? Yes. Okay. Thank you. Okay, great.
This has been a fantastic presentation. We've got a few different questions that have come in in the time that you've been giving it, so a fair bit of interest here. Let me toss a couple your way. This first one came in a little bit earlier: the question is asking how the workloads are enumerated using kube-scan. Yes, that's a very good question. We often get asked: what does kube-scan look at? Does it look at configuration files? What happens if I install my workload with Helm charts or operators? Kube-scan looks at the runtime configuration of your workloads. It doesn't matter if your workload was installed with a YAML file or with a Helm chart, or if operators are making any changes; it's looking at the runtime configuration through the Kubernetes API. Makes sense. Very good. We've got a collection of questions that are somewhat related to one another, so I'm gonna conflate two of them and let's see if there's a difference. The first one's rather straightforward: it's a question about whether or not kube-scan is compatible with OpenShift. So, yes, it works on any cloud provider and also with OpenShift. It's using the standard Kubernetes API, so it will work with any Kubernetes distribution. Nice. On the topic of compatibility, another question thrown in there is: is there compatibility with AWS Fargate tasks or, like, Google Cloud Run instances? That's a good question. I don't know if we have tried it; I'll check. I'm not sure if we have tried it on Fargate or Google Cloud Run. Fair enough. Very good. And, like I said, we're gonna conflate two things: there's a couple of questions about pod security policies, or PSP-enabled Kubernetes clusters. Do you know if kube-scan has been used in that environment? Is there compatibility there?
So I don't know if the question is how kube-scan looks at the pod security policy, or whether the pod security policy would prevent kube-scan from running. The answer should be that it will work with most strict pod security policies, and also that it really looks at the current configuration, which will have been enforced, if you want, by your pod security policy. It looks at how the containers are actually running, not the policy that's being set, but really how they are running at that time. Okay, got it. And I think some of the attendees are interested in what's further into the future, so I think we're gonna get into the future of kube-scan and its compatibility with, or its cognizance of, pod security policies. Very good. Lots of questions coming through. Another question here is whether or not it's possible to restrict kube-scan to a specific namespace. Not yet; we have an issue open for that, so we are looking at doing something like that in the coming days or weeks. Right now it looks at your entire cluster. If you're only interested in one namespace, you can filter by namespace, so you can organize the results by namespace and not by risk. But for scanning, there's an issue open for running it only on a specific namespace or namespaces. Okay, very good. Another question here has to do with whether or not the configuration items align with the CIS benchmarks. So that's interesting. Yes, we looked at the CIS benchmark to make sure that we had rules that covered everything. That's one of the reasons why we added some of the RBAC rules; I checked them in yesterday, or Monday. So everything that you will see in CIS is covered, and much more. Everything related to containers, I should say. In the latest CIS benchmark, 1.5 if I remember correctly, it's section 5: 5.3, 5.6, 5.7. That is all part of it.
We don't reference the CIS benchmark explicitly, but it is covered, yes. In the end, the CIS benchmark is also about container settings, at least in section 5, and that's what we cover. Okay. A related question is whether or not the attack vectors align with the MITRE ATT&CK framework. Yeah, so we use the attack vector from CVSS, and it's only local and remote. We are adding something that's a bit similar to one of the MITRE frameworks that classifies the types of attacks. Again, we do it specifically for Kubernetes, and we are starting to add that kind of information, a better classification of the types of risk, to improve the formula, especially to improve the way we match risks with remediations. So we are adding categories like secret exposure, lateral movement, privilege escalation, this kind of thing. That will allow us to have more granularity when we match risks with remediations, and for the workload formula. It's the same spirit as the MITRE ATT&CK framework, but it's very specific to Kubernetes, so there will be fewer categories and they will be a bit different from that framework. Okay, understood. Very good. Well, there's just a couple of final questions, and I think some interest and feedback on pod security policies. The note here is that there may be multiple PSP policies in the environment, some for infrastructure components, some for tenants, and the question is whether or not kube-scan ensures that the correct PSP policies are taken into account while evaluating risk. So again, because we're looking at how the container is running and not how it was originally configured, it includes any kind of change, whether it's from a policy or, again, from operators or anything. In the end, what we're looking at is the current state of the workload: not how it was originally configured, but really how it's running right now.
So that would include whatever policies are being enforced, because they change how the workload is running. Again, we look at how it's running right now, not how it was set up or configured. All right, very good. I think we've hit the bottom of the bucket on our questions. So, Julien, if you had a couple of other slides, we're ready for those. Let me conclude. I just wanted to talk about what's next. I already mentioned that we are looking at better ways to match remediations and risks, mostly by introducing more categories, and we're always looking at making the workload risk score better. We want to add more rules. We just added a couple on RBAC this week, and we're looking at a few more, on secrets in environment variables, for example. We welcome any suggestions; I'm sure we are missing some. The next big step will probably be to expand outside of Kubernetes, because typically a Kubernetes cluster lives in a bigger environment where there are other controls, especially on the network side, things like load balancers that sit in front of all the ingress traffic, or network policies, that also, in the end, change the risk profile of your cluster. And finally, more tools for KCCSS. Outside of kube-scan, just having a better understanding of the risks, how the risks interact with each other, and what kinds of remediation you can apply is very useful, so we want to make it easier to explore the KCCSS framework. Stay tuned: in the next couple of weeks we'll have more tools, including an online tool to do that. We just checked in this week a few basic tools to validate and start playing with the rules. If you look for KCCSS, you should be able to do a simple search on GitHub for KCCSS, or Octarine KCCSS, and you'll find the GitHub homepage. You can also go to our website, octarinesec.com, and at the very top there is an open source tab.
You can click on it and you'll get information about kube-scan and KCCSS. If you have any questions, whether it's something I could answer here, something I was not clear on, or just a new question that comes up later, don't hesitate to email me; my email address is here. Feel free also to open issues. We are looking for PRs and, you know, new rules, so I hope some of you will contribute to KCCSS, and to kube-scan as well.

This was a great presentation, Julien. We did have another question come through from, I think, another Google Kubernetes Engine user, just looking to affirm compatibility, so I guess to sort of answer that same question again: does kube-scan work in GKE? Yes. The demo that I showed was actually in GKE; the demo environment is in GKE and the container was installed in GKE. So yes: AWS, Azure, Google, OpenShift, they work for sure. Fargate, again, and Google Cloud Run we haven't tried. For specific environments, if you think it doesn't work, either open an issue directly on GitHub or send me an email. Nice. Okay, very good. Another question was whether or not there are any plans to get this contributed to the community as an off-the-shelf project. So yes. We haven't engaged with anybody yet; we wanted to make sure that there's some maturity in the project and that we have enough feedback to know exactly what direction we want to go. But yes, we'll probably be looking at the CNCF to see if we can put the project under that umbrella, or some other open source organization. Very good. All right, I think those are all the questions that we have today. So thanks so much, all, for joining us on this CNCF webinar. The webinar recording and the slides will be online later today, and we're looking forward to seeing you all at a future CNCF webinar. And thank you so much for telling us about KCCSS and kube-scan, Julien. Thank you, everybody. Thank you. All right, very good. Have a good day, everyone.