Okay. Thank you, everyone, for joining us today. Welcome to our live webinar for CNCF: KubeClarity, Bringing Clarity to Your Kubernetes Artifacts Security. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Alexey Kravtsov, Software Team Lead, and Zohar Kaufman, Director of Engineering. Excuse me, Principal Engineer, both with Cisco. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee, but you can see our lovely chat box on the right-hand side. Please feel free to drop your questions there and we'll get to as many as we can. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of our fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under Online Programs. They are also available via the registration link that you used today, and the recording will be on our Online Programs YouTube playlist on the CNCF channel. With that, I will hand things over to Alexey and Zohar to start our presentation today. Thank you very much, Libby, for this introduction. Good morning, good evening, good afternoon, everyone, wherever you are in the world. I'm Zohar Kaufman. I was the co-founder and CTO of Portshift, a startup in the Kubernetes security area that was acquired one and a half years ago by Cisco. And together with me today is Alexey Kravtsov. Alexey, would you like to introduce yourself? Yeah, sure. Hello, everybody. My name is Alexey Kravtsov, also from the days of Portshift. I'm leading the enforcement controller team and I'm mainly active in the Kubernetes field, also in open source projects like KubeClarity and APIClarity. Thank you.
Okay, so today we will talk about how we can help you bring clarity to your Kubernetes artifacts. Before diving into the material that we organized for you today, I would like to discuss two big mishaps that happened in the last year. One of them is Log4j. Log4j is very popular. How popular? Pretty much every Java application or microservice out there is using Log4j. And an enormous flaw was discovered in it, just before Christmas last year. It allowed attackers to execute code remotely on the target computer, which would let them steal data, install malware, or take control of the whole system. There were hundreds of millions of attack attempts, and major breaches were disclosed. So let's go back to December 2021. I'm the owner of such an application that is using Java, that is probably using Log4j. How would I know if I'm vulnerable or not, if I'm actually using Log4j or not? It's a good question, and we will answer it in a moment. Another very big incident is dependency confusion. It was publicized last year by a researcher named Alex Birsan. He is a nice guy who tried to hack into companies under contract, letting them know that he's trying and winning bounty money if he succeeds, and of course disclosing everything that he is doing. So it's all legal and everything is fine. He tried to do it using packages. Any software now uses lots of packages, and some of them are probably open source. These open source packages can be hosted either privately in the local registry of the company or publicly. So he wanted to upload his code to a public registry, trick developers into using it, and then run code inside their applications. And he had a very nice trick for letting him know that somebody is using his code: he encoded all the information about the host and the developer inside a DNS request that was sent to him.
And if such a DNS request came through, then he knew that his code was running somewhere. The first thing he tried is typosquatting. It's an attack that leverages typos. I take a very popular package, introduce a minor typo, and upload a package under that misspelled name to the public repository. So if a developer, instead of the package depicted here, types its name with a typo, then maybe he would use my version of the library. This was not so successful. Then he thought about something much more clever. There are package managers that are used widely across different languages. For example, Python's pip uses PyPI, which is the Python Package Index. And these package managers, if they are given the relevant option, with pip it's --extra-index-url, then they search the public repositories as well, and not only the private ones, for the packages. And they do something that sounds logical: they look for the package in all places and use the latest and greatest version of that package. So it sounds logical, I want to use the latest, but it opens up a door for hackers. Let's assume that I have a package that is only in my private registry. It's not anywhere in public. And the version is, like here, 2021.0.3.1. And now, somehow, someone knows that I'm using this private package. And this someone, Alex Birsan in this case, uploads a public package with the same name to some public repository, but with an enormously big version. So if someone is using this parameter and this package, then they will use Alex's package and not their own. He was not sure it was going to work. The way he knew which packages are used privately: in JavaScript, you have a file named package.json. It contains the names of all of a JavaScript project's dependencies. Some of them are public, some of them are private.
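The "latest and greatest" resolution logic he exploited can be sketched roughly like this. This is not pip's actual implementation, just a minimal illustration of the behavior; the package name and version numbers here are hypothetical.

```python
# A rough sketch of the version-resolution behavior that dependency
# confusion exploits. NOT pip's real implementation; the package name
# and versions are made up for illustration only.

def resolve(package, private_index, public_index):
    """Naively pick the highest version found across both indexes."""
    candidates = private_index.get(package, []) + public_index.get(package, [])
    # "use the latest and greatest" -- the logic that opens the door
    return max(candidates, key=lambda v: tuple(int(x) for x in v.split(".")))

private_index = {"internal-utils": ["2021.0.3.1"]}   # the company's registry
public_index = {"internal-utils": ["9999.0.0"]}      # the attacker's upload

print(resolve("internal-utils", private_index, public_index))  # prints 9999.0.0
```

The attacker's 9999.0.0 wins simply because it is "newer", even though the package only ever existed privately.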
So this way, he knew a list of private packages used around Microsoft, Apple, and various other big companies. He tried them all and uploaded many, many packages to the public repositories. And surprise, surprise, in almost all the companies he targeted, he managed to run his code inside their applications. And once you run someone's code, they can do anything on your machine. So this was a very, very famous incident. And it raises the question: how do we know which packages we are actually using? We know we wanted to use the private ones, but which ones are actually used? Taking this into account, let's review our agenda. First of all, we'll pinpoint the problem that we are going to solve, bringing clarity to our runtime Kubernetes. Then we will talk about vulnerability detection challenges. We will introduce an open source project named KubeClarity and review its high-level architecture, do a cool demo, and also talk a little bit about the roadmap. And of course, we will be happy to answer any questions that you have. So, problem statement: we want to scan for vulnerabilities. The first thing that we want is to know the building blocks of our software, so which packages we are really using. Again, think about dependency confusion; it's not such a trivial question. Second, we want to detect vulnerabilities in these building blocks, even if these vulnerabilities are discovered post-deployment. Maybe in CI/CD everything was okay, but during runtime the Log4j issue is discovered and I want to know that I'm now vulnerable. I would also like to correlate all the vulnerable building blocks across my applications. So again, if we take Log4j, I would like to know all the applications that I'm running in production that are actually vulnerable, that are using this flawed Log4j version.
So we want to do a remediation of all my applications following the discovered vulnerability. But before we introduce a solution, we have a few challenges that we need to solve. The first one is the vulnerability detection challenge of the software bill of materials, or in short, SBOM. The SBOM is the base of any vulnerability system. We need to know our building blocks, and if we know the building blocks, then we can list the vulnerabilities for each of them. But it's not that easy to know the SBOM for all our software applications and packages. The challenge here is that various programming languages are used, and each of them may use different package managers. Some of them are listed here: for example, Java uses Maven, Node uses npm, Python uses pip, and there are Go modules and others. Next, the OS can also introduce vulnerabilities, and there are various OS distributions, each with its own package dependency information, and while building the image we will probably strip that information. So if we just take the image that was produced by our CI/CD, maybe we lack this software bill of materials information. Taking all this into account, we will see in a few minutes our solution to that. But let's assume that we have the software bill of materials, the SBOM. Now we just want to know which vulnerabilities are there in this list of packages. There are also challenges here. There are many, many vulnerability scanners, or SBOM analyzers by another name, that can be used. Some of them scan containers, others scan directories of files. Some of them are better at specific languages, let's say Python, while others are good at JavaScript or Ruby or Node. And there are, of course, various Linux distributions, and maybe one scanner is very good at detecting vulnerabilities at the OS level, but it's not that good at scanning JavaScript.
So we need to combine, let's say, a few best-of-breed scanners, but each of them has its own format, and maybe also its own way of ingesting the software bill of materials, either SPDX or CycloneDX. So it's not trivial to take all these and unite the results of a few scanners. Next, let's assume we solved the two problems that we just discussed. How and when should we scan? There are different phases of the CI/CD: there is source code, then it's pushed to a Git repo, then we compile it and build images, then deploy, and then have it in runtime. Each of these stages adds a place where new vulnerabilities can be introduced, so where should we scan? If we scan in all stages, maybe we are doing excessive work. Otherwise, if we are scanning only in one place, maybe we are losing information that we could gain if we scanned in other places. So again, this is a challenge, and we will see our solution to that challenge in a moment. Just before going to the solution: looking at applications, applications are built from application resources, which are directories or images, each of them having packages, and the packages may enclose vulnerabilities. So a good solution to the challenges that we already described should answer these questions: where should we store the SBOM and vulnerability information, and how are our applications affected by new vulnerabilities that are discovered? And we should be able to traverse this graph very easily. So if I have an application, I should quickly know which resources and packages it is using and which vulnerabilities are encompassed there. And also vice versa: if I have a vulnerability, Log4j, I would like to know which packages, application resources, and applications are using this package.
So this is it from my side, and now we will turn to Alexey to show you our solution, KubeClarity. Thank you. So, one second. Great. So, exactly for the problems that were described, we created this solution. What does it include? We tried to create, you can say, a universal scanner and SBOM analyzer. We actually used different scanners where you just gave them an image or a directory and they calculated its vulnerabilities. And then a newer scanner came out which was better, and we tried to understand how to compare them. And we eventually saw that an accurate bill of materials yields accurate vulnerability detection, because if you don't analyze a certain package correctly, you will definitely not find its vulnerabilities. So first of all, we split the vulnerability scan into two phases, and each phase has its own benefits. Actually, it's not only for vulnerability detection. First, we can analyze and scan images and directories, and it's also useful for serverless. So we can not only scan CI/CD pipelines, as described, but we can also scan runtime Kubernetes clusters, meaning all the pods that run in your cluster, and I will show you that in a second. And we don't try to be a scanner product. We don't try to create scanning logic or SBOM analysis logic. Instead, we want to use the best tools. We saw that if we combine the leading tools, we get higher accuracy. So that's what we did. We basically created a pluggable infrastructure to plug in your own or the popular solutions for SBOM generation and also for vulnerability scanning. So that's the pluggable, universal scanner, you can call it. After we scan all the resources, a bunch of directories and images, eventually we want to group them logically under certain applications. For example, I have my Kubernetes cluster and I have a pod that consists of five containers.
I don't know, some heavy logic, or somebody decided to build it that way. Or even if my application consists of several pods, I want to combine all the resources of the pods, images, directories, and so on, into one logical application. So if I have an issue, I will know what application I should treat and maybe who I should alert about it. And of course, it's not only about vulnerabilities. An SBOM gives you valuable information about the licenses that are being used in your software, and also information about programming languages and more. The new and popular SBOM formats support a lot of metadata on top of it. So to address the scan stages, we split, like we said: the content analyzer, as we call it, will eventually generate the SBOM using several SBOM generators. It will understand what packages your image or directory consists of. If the input is an image or file system, the output will be an SBOM, and this SBOM can be used as the input for the vulnerability scanner. These are the two main components, the content analyzer and the vulnerability scanner, which, as I'll show you in a second, both consist of parallel analyzers and scanners. And these are the two building blocks that we use across our whole solution. The first one is the content analyzer. As I said, the input can be an image or directory, and then the plugin, or the converter, needs to convert it into the format of each analyzer. We can also put an SBOM in as input. So for example, if I want to analyze my OS base image, and I want to add the SBOM of my application's code, I can merge the SBOM of the image and the SBOM of the application into one big SBOM, because that's eventually what is running in your application, right? It's the base image that you started from and your logic that is on top of it.
And of course, we need to merge the SBOM results, because there can be duplications and probably multiple analyzers will find the same packages, so we need to flatten them. And we will show exactly who found what and which packages some of the analyzers missed. The same concept goes for the vulnerability scanners. We can basically scan a directory right away, or we can use the SBOM that was produced in the previous phase by the content analyzer. And again, we probably need to format it the way each scanner expects it, in order to scan the SBOM or the directory for vulnerabilities. And each scanner outputs the vulnerabilities in a different format, either its own proprietary JSON or popular standards like CycloneDX. So eventually we need to format and merge them all to get a clear result from all the scanners of who found which vulnerabilities, with some filtering logic, of course, to get the final result. You can mix and match and spread these two building blocks across different phases. For example, you can analyze your code when you are building the application, analyze your image when you are building the image, and then merge everything and scan this one SBOM once, after you have all the packages and all your dependencies across all the phases of your CI/CD pipelines. On top of that, like I said, we offer runtime scans, which I'll show you in a second. It can also be used for serverless, and I'll also show you how we manage all the resources, I mean the images, and how they belong to an application, and how you can traverse the object tree, meaning how you can see which application has which images, which packages these images consist of, and how these packages link to vulnerabilities, in any direction you can imagine in this tree. So that's the high-level architecture of the whole solution. Basically, we integrate with CI/CD pipelines, like I showed you, with the CLI using two commands, scan and analyze.
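The merge-and-flatten step described above can be sketched in a few lines: duplicates collapse into one entry, but each package remembers which analyzers reported it, so you can later see who found what and who missed it. This is only a sketch of the idea, not KubeClarity's merge code; the package names are hypothetical.

```python
# Sketch of merging SBOM results from several analyzers: duplicate
# packages are flattened into one entry, while recording which analyzer
# reported each package. Analyzer and package names are hypothetical.

def merge_sboms(results):
    """results: {analyzer_name: [package, ...]} -> {package: [analyzers]}"""
    merged = {}
    for analyzer, packages in results.items():
        for pkg in packages:
            merged.setdefault(pkg, []).append(analyzer)
    return merged

merged = merge_sboms({
    "analyzer-a": ["openssl-1.1.1", "freetype-2.10"],
    "analyzer-b": ["openssl-1.1.1"],
})
print(merged)
# {'openssl-1.1.1': ['analyzer-a', 'analyzer-b'], 'freetype-2.10': ['analyzer-a']}
```

From the merged view it is immediately visible that analyzer-b missed freetype-2.10, which is the "competition between analyzers" shown later in the demo.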
We have a user interface, which I will show you in a second, that the user can interact with. We have a database of SBOMs, just to cache things, because we saw that the heavy logic is fetching and extracting the image and going over all the layers to understand exactly the content. So we keep this database for better performance, and I will show you in a second what the benefit of it is. And we have the runtime scan orchestrator that basically spins up a Kubernetes job for each unique image in the target namespaces that I want to scan. Each job runs the analyze and scan steps, so again we have these two building blocks in our jobs as well. The results are reported back to the backend and stored in the database. So if I have one pod that consists of image one and image two, two jobs will be created, one for each image. If I have two replicas of this pod, I will do the same thing: I will spin up only two jobs in total, each one for a unique image in the containers. So what is this good for? We saw that the scans are much faster now, because if you already have the SBOM, which again is the main time-consuming part to produce, the scan decreases from minutes, like five minutes or even more for big images, to just a few seconds. And that means you can also integrate these tools into your admission control in Kubernetes, where you are limited to a very short amount of time; it can be, say, 10 or 30 seconds, depending on the configuration. And again, you can discover your vulnerabilities across your stages, and not only after the image was built, for example, when you would just prevent it from being pushed. And like Zohar said, some of the packages may be stripped from the final binary, and you just lose these dependencies, and obviously the vulnerabilities, not lose the vulnerabilities, but lose their detection.
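The one-job-per-unique-image behavior just described can be sketched as a simple deduplication over the discovered pods. This is an illustration of the orchestration idea, not the actual orchestrator code; pod and image names are hypothetical.

```python
# Sketch of the runtime-scan orchestration described above: collect every
# image from the target pods, deduplicate, and spin up one scan job per
# unique image. Replicas of the same pod add no extra jobs.
# Pod and image names are hypothetical.

def jobs_for_pods(pods):
    """pods: [{'name': ..., 'images': [...]}] -> sorted unique images to scan."""
    unique = {img for pod in pods for img in pod["images"]}
    return sorted(unique)

pods = [
    {"name": "web-0", "images": ["image-1", "image-2"]},
    {"name": "web-1", "images": ["image-1", "image-2"]},  # replica: same images
]
print(jobs_for_pods(pods))  # ['image-1', 'image-2'] -> two jobs in total
```

Two replicas of the same pod still produce only two jobs, one per unique image, which together with the SBOM cache is what keeps the runtime scan quick.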
And eventually we saw it ourselves: we bet on one scanner being the best, and then another scanner became the leading one, and we needed to change everything. And we said, hey, why not just plug in more and more scanners, and then we will have the optimal solution. And we actually saw that we got the highest detection percentage with this method. And I mentioned serverless again. So, cool, let's go to the demo and show you. So here is the KubeClarity repo, under the OpenClarity organization with some other interesting projects. It's basically everything that I described, and it lists all the features. All you need to do to have it running in your Kubernetes cluster is add the Helm repo, and then you can create the Helm values. You can configure it as much as you want; you can set different analyzers or scanners and configure a lot of things there. And then just a simple Helm installation. Then what I'm doing here, I'm just port-forwarding to KubeClarity's UI and accessing it on the local port. So, I hope the port forwarding is still alive. Looks good. And that should be the UI of KubeClarity. Great. As you can see, this is the dashboard of KubeClarity. We try to make it as actionable as possible, meaning we don't flood it with information, but all these things are meant to be really actionable and allow you to fix your vulnerabilities and gain insight into the components of your software. So for example, here we see, I already ran a few scans before, but here you can see that I have this amount of vulnerabilities in my cluster. Out of these vulnerabilities, I have 438 which are fixable. And here we sort them according to severity. So for example, if I have critical vulnerabilities which are solvable, you can just click on this, and it takes me to the vulnerabilities page with all the filters set: only vulnerabilities which are critical and have fixes, and I can sort them.
Sorry, so here it shows only criticals, but if I sort them, it also sorts them according to the CVSS score, in this case 9.8. The same goes for the other severities, if I want to tackle all the high severities separately. So here you see what I maybe want to try to handle first, and maybe handle the lower CVSS scores later. And again, I can apply filters on all the fields and columns here. If I go back to the dashboard, I can see my top five vulnerable applications sorted by severity. Again, the applications are a totally logical grouping, and I will show you how it's helpful in Kubernetes and also outside Kubernetes. I can see which are the most vulnerable resources. These are the images that are most vulnerable; I have this image with 31 vulnerabilities. Again, everything is clickable, and I can go to the vulnerabilities. And I see that my freetype package, for example, is the most vulnerable package in my system. I can also go from here, and in a second I will show you how I can traverse the whole object tree, perform scans, and all that. So just another look. Here we have information which is not necessarily related to security. Here we count and show you the packages by their open source license, and you can see the license distribution. For example, if you want everything to be MIT or Apache, or whatever you want, you can monitor it here, and of course everything is clickable, so you can go and filter these packages. Here everything is already filtered by MIT. So it's not just showing you data; you can also focus on all your Java applications, for example, and look at it this way. We also added a new vulnerability trends widget here. This is especially useful because we saw that if you have, for example, a periodic scan that scans your cluster every day, you scan it and scan it, and each time you have 1,000 vulnerabilities. So on each scan, you will see 1,000 vulnerabilities.
Meaning if a new vulnerability is discovered, you might miss it, because what is 1,000 versus 1,001? So here we created new vulnerability trends, meaning if one scan finds 1,000, great, it will show that count. On the next scan, if nothing new is discovered, you will see a flat line until a new vulnerability is introduced. This way you are not distracted by existing vulnerabilities that you have already attended to, and you only focus on newly discovered vulnerabilities. So that's the dashboard. As I said, here we have applications. For example, in a runtime scan, all these were detected automatically. In this case, these are pods that are running in the Sock Shop demo application, running in the sock-shop environment, and these are the labels. All of these fields and information were filled in automatically following the runtime scan; I will show you how this happened. Second, I didn't mention it, but we also run the CIS Docker benchmark on your images, so you can check the best practices of the way you created your image, whether you missed something in the Dockerfile or something like this. You can choose it both in CI/CD and in a runtime scan; I'll show you in a second. And of course, as I mentioned, you can traverse these objects. For example, I have my user-db application. It has 113 packages. If I click on this, this is basically the SBOM. But I can go first to the resources, for example. I see, okay, this is the only image that belongs to this application, and I can go up the tree back to the application. I can go back to the resource that I just saw, see all the packages of this resource, and jump to the vulnerabilities for each package. So this is how I traverse the tree that Zohar showed in the slide. I can go in every direction. So I can take this vulnerability, for example, in the Log4j example that Zohar gave. Okay, I discovered, let's say, this one is Log4j.
Let's see all the images that are affected by Log4j and say, okay, these are the images. Cool. So if I fix this application resource for this CVE and that one, I will know what to treat in my image. And again, I can go from packages to the application resources. So, sorry, I clicked on drill down. I forgot to mention: on each line, if you click on it, you see some more details about it. And here you can list all the applications and resources that use it, in detail. And wherever there are image hashes, you can click on them; of course, they will lead you back to the image. And here is the interesting part, where we actually create a competition between SBOM analyzers. We can see this for each package, for example, if I executed several analyzers. I didn't mention it, but we currently support Syft and the CycloneDX gomod analyzer, and for vulnerability scanners, we support Grype and Dependency-Track. And we plan to plug in more and more, and we of course welcome the community and everybody who wishes to contribute more scanner integrations. So here you can say, okay, this scanner was better for JavaScript, because this package wasn't detected by the gomod analyzer, or whatever. So that can also be useful. And yeah, that basically covers the management of your vulnerabilities and SBOMs, the traversal between the affected elements, and, if you're interested, operational information like coding languages and license types and so on. You can traverse it in any direction that you want. Of course, each screen here has filters to find exactly what you need and sort according to any property. So I'll just show you real quick the runtime scan. I already executed it. We will introduce a scheduled scan really soon, in the upcoming days; currently the only option is whether you want the CIS Docker benchmark or not.
So for example, if I disable it and just start the scan, I can select the scope of the namespaces. These namespaces are detected automatically in my cluster. So if I want to scan, I can add, I don't know, istio-system, or remove it, and I can just initiate the scan. Basically, what is going on currently relates to the high-level architecture that I showed you: we discover all the pods in the namespaces, we discover the unique images, and we spin up a job for each image. And then we check the SBOM DB, and that's why the result was so quick, even when I scanned more than 10 images just now. Yeah, so I see that 13 images were affected by the scan. I can also filter by application and by vulnerability severity. For example, show me all the resources, elements, sorry, that are affected by critical vulnerabilities. So I have 42 critical vulnerabilities, which I can go to. As you can see, we have something that we call a system filter that shows you the context in which we are viewing the screens now. We reuse the same screens; we don't put the information in different places. So now I'm focusing on the vulnerabilities. So instead of going back here and clicking on packages each time, I can go from here to vulnerabilities, back to packages, back to affected applications. And again, if I want to narrow the search, I can do it to a more critical vulnerability. As you can see, all the filters are set automatically here. And I can also delete this filter, and I'm going back to normal, as if I had clicked on this screen. So yeah, let me show you if I enable the CIS Docker benchmark again. Here you don't see the section, but if I scan it again, it should take a bit longer because we are doing extra work: it's not only SBOM and vulnerability detection, we also benchmark for CIS Docker this time. So I hope that in a few seconds we can show the results. And again, all this is under active development.
We change this every day, adding more and more features. So yeah, this time you see the Docker benchmark. For example, you can filter by it, not only by vulnerabilities. The Docker benchmark is not related to packages and vulnerabilities; it's related to images. So I can see all the images that have a fatal warning, for example. And again, I can drill down and see the exact reasons for that. So one last feature that I want to show is the integration with CI/CD pipelines. I can create my application manually, for example, demo app. I can say that it's going to be a pod, because it doesn't really matter, with the label app equals demo. I don't need to set the environment for now. So we can search it from here: application name, for example, contains demo. And this is my application. As you see, there are no vulnerabilities or Docker benchmark results or packages or anything, because I don't know anything about it; I just created it. So I want to scan it with the CLI, which basically mimics the CI/CD. This CLI can be used in a CI/CD pipeline. Again, everything is in the README of the project, and I just created this quickly. So let me try it. This is the port-forwarding screen. Can you still see my screen, Zohar? Yes, we can still see your screen. Okay. Sorry, I clicked on something; I hope it didn't affect the share. Okay. Cool. So basically, here I copied the CLI command to analyze the NGINX image, for example. I just need to provide the application ID, which was created here. So, yeah, it could take, I don't know, several seconds or up to a minute to analyze. Okay, great. It was quick to analyze the NGINX image. And this actually produced the demo app SBOM file; that's what I did, and you can see it here. So let's scan this SBOM file using our scanner. I can also control the scanners that are used; for simplicity and quickness, I will just use Grype for now. So basically, it is told to scan the demo app SBOM file, and the input type is SBOM and not an image.
Again, I'll need the application ID here. I forgot to mention: everything that you see in the UI, we have an API for. So you don't have to go to the UI for this. We have a Swagger spec and generated code, so you can create your own tools to programmatically fetch all the required information. That's exactly what we use in the UI. So now I scanned this SBOM. And if I also want to scan it for the Dockerfile, sorry, the CIS Docker benchmark, I can run this additionally with the app ID. This also should take several seconds. And I will show you, if I refresh, I will have all this information in the UI and actually in my backend, because I use, I'll show you, I use everywhere the -e flag, which basically tells it to export the information to this address, which is currently localhost, but it can be anything; I just use port forwarding. So if I refresh, you can see that all the licenses have been analyzed. I know all the vulnerabilities, the CIS Docker benchmark of this image, I know the packages, I can see the SBOM and all the vulnerabilities again, sort them, and do whatever I want. Probably something was affected here in the newly discovered vulnerabilities. But I think that's pretty much an overall view of all the features that we have. And just a quick word about our roadmap. Of course, we're planning to integrate additional SBOM analyzers and scanners, because this is the core of the tool, and the idea is to run as many as possible, the ones best suited to your needs, programming languages, and OS distributions, to get the highest detection results. We are also working actively to integrate with supply chain security, Sigstore, and image signing, with tools like in-toto, cosign, and all the Sigstore components. That's what we are actively working on now. And of course, system settings and user management are always nice, so those are next to come. Thank you very much. Thank you.
Yeah, let's go ahead and take some questions. Everybody, pop them in the chat box if you have any. So until they get populated, I'll just mention why we cared: this is because we are using this open source in our own offering. And of course, we welcome everybody in the community to contribute to it, so you and others may also use this generic scanner that brings you the best of all worlds. Anyone have any questions? There we go. Yes, so, great question. The question is: do we have a repo containing fixed open source package versions for remediation? Great question, not yet. We have the fixed version, and you can go to the external links that are listed in the vulnerability database. But that's a great item; I think we can add it to the roadmap. So we brought the remediation to the point where we detect which are the vulnerable elements and which applications they affect. But I think it would be great to maybe automate some procedure, like maybe Dependabot is doing in GitHub, which says: I already detected all the fixes and I prepared the pull request for you, you just need to approve it. So for environments that don't have Dependabot and tools like this, maybe that could also be useful. So thanks a lot for the suggestion. All right. Does anyone else have a question? Going once, going twice. Okay, well, with that, we'll go ahead and wrap up. Thank you so much, Alexey and Zohar, for your presentation. Looks like it was very concise and everyone is pretty clear. If you want to share in the chat any channels where anyone can reach you or follow up with any additional questions, feel free to pop those into the chat now for everyone. And if not, we will see everyone the next time around in another CNCF live webinar. Thank you both so much for hosting. Thanks a lot, Libby. Thanks a lot, everybody. Bye-bye. Bye.