Hi everyone. Welcome to Cloud Native SecurityCon North America 2022. This could have been an in-person discussion, but given our limitations, let's enjoy it in virtual mode. So, what is this talk about? This is a panel discussion titled "Say Hi to the New Couple in Town: DockerSlim and Kyverno — Making Your Kubernetes Workloads More Secure." Next slide. Who do we have with us? First, I'm Luika Bulani. I'm a final-year engineering student studying information technology. I have been an LFX mentee on the Crossplane project and also a technical writer for SigNoz, which is an open-source APM. This is my first KubeCon + CloudNativeCon as a speaker, and I'm beyond excited. Before we go ahead: I've heard a lot about what happens when you develop applications and get them into containers, but sometimes forget to harden them before shipping to production. This talk is very much related to that, and by the end of the discussion I hope you and I both have answers and find something interesting. Honestly, today I'm here to learn from my expert co-speakers, Mritunjay and Shuting Zhao. So Mritunjay, could you please introduce yourself? Thank you so much, Luika, for that great introduction, and welcome, everybody, to Cloud Native SecurityCon. Introducing myself: my name is Mritunjay. Since we cannot make it in person, I've set my background to North America — so let's make it fun. I just graduated with a bachelor's in computer science engineering, and currently I'm working at Slim.ai as a member of technical staff. Previously, I have been contributing to various open-source projects for about two years now, as a Google Summer of Code mentee and as an LFX mentee, and I've contributed to both Kyverno and DockerSlim, which we are going to discuss today.
We're really hoping it's going to be a great session for all of us. Thank you, Luika. Yes, thank you, Mritunjay. Now, Shuting, you may go ahead. Thanks, Luika, and thanks, Mritunjay. Hello, everyone. This is Shuting Zhao. I'm coming from Kyverno — I'm a maintainer of Kyverno, leading the Kyverno releases and contributing to all phases of the project — and I'm currently working as a staff engineer at Nirmata. Thank you, Shuting, that was a great intro. Now, moving on, our first tool is Kyverno. So Shuting, could you please tell us what policies are, what Kyverno is, and how Kyverno helps in securing workloads in Kubernetes? Sure. Today we'll first look at policies, and then we'll look into the policy engines that are available for Kubernetes. Policies are really a contract for your shared environment. Let's take Kubernetes as an example. The most common practice is probably running and maintaining your deployments on a shared Kubernetes cluster. Kubernetes is known to be extremely powerful, with declarative configuration capabilities: you can specify manifests for pods, deployments, services, and more. Tons of configuration settings can be expressed in Kubernetes in that very declarative manner. And several different roles might be involved in this scenario — your ops team, your security team, the devs who manage applications — all involved in some aspect or other of this configuration. Here, policy helps separate those concerns across the different roles. And because of that declarative nature, the configurations get very detailed, so you need a solution like policy to validate and secure those configurations at scale. And really, the idea is that policy should not just be for security, but also for automation.
So how do we simply manage these Kubernetes configurations — automating as much as we can, taking away some of the need for coordination and manual handoffs, and doing it in a manner that is intuitive to folks who have already put in the time and effort to learn Kubernetes? Policy can help with all of that. Now that we understand why we need policies, let's talk about policy engines. If you have used policy engines before, you may know there are options like Kyverno, OPA Gatekeeper, Kubewarden, and others. Today, let me introduce Kyverno a little bit. Kyverno is a Kubernetes-native policy engine — no new language is required, so there's essentially no learning curve. Policies are managed as Kubernetes custom resources, which are easy to write and manage. Kyverno generates policy reports based on the policy application results; the report is also a custom resource available in Kubernetes, which makes it easy to access, fetch, and process. Kyverno has the ability to validate resources — you can either block resource creation or audit it in the policy reports — to mutate incoming requests or existing objects, to generate additional new resources, and to verify image signatures. Kyverno also supports all Kubernetes types, including custom resources. In fact, since policies and policy reports are themselves managed as CRs in Kubernetes, policies can be extended to apply to any of the custom resources available in your cluster as well. And Kyverno leverages Kubernetes patterns and practices — things like labels and annotation selectors, which can be used to select matching resources. For events, Kyverno generates events on policy application. It understands owner references: you can look up the owner reference, and it also uses owner references for garbage collection. And if you have pod policies installed in the Kubernetes cluster, Kyverno will automatically cover the pod controllers.
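To make that concrete, here is a minimal sketch of what a Kyverno policy looks like as a custom resource. This is an illustrative example rather than one from the talk — the policy name and the required `team` label are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label      # hypothetical policy name
spec:
  validationFailureAction: Enforce   # block non-compliant requests (Audit would only report)
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required on every Pod."
        pattern:
          metadata:
            labels:
              team: "?*"        # any non-empty value is accepted
```

Because the policy itself is just a Kubernetes resource, it is applied with a plain `kubectl apply -f`, and violations surface in the PolicyReport custom resources Shuting mentioned.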
So many things are available in Kyverno. But today we're not going to dive deep into Kyverno and its architecture; instead, let's talk about the top use cases we've been collecting from the Kyverno community. The first thing that comes to mind — the most common use case we hear from the community — is the ability to validate and secure pod and workload security. As you know, Kubernetes has removed support for Pod Security Policies and has instead introduced pod security admission to enforce the Pod Security Standards. With the Kyverno 1.8 release, there is native support for integrating with the pod security admission controller: you can specify a new type of pod security rule to work with PSA and use that to enforce the Pod Security Standards. People also use validate policies to enforce best practices — for example, that you don't want the image tag to be set to `latest` — and you can use validate policies to eliminate misconfigurations, and so on. With Kyverno's generate ability, there is an interesting use case around multi-tenancy: generation helps you set up new namespaces or virtual clusters. It can generate a bunch of RBAC resources, as well as secrets, config maps — whatever resources are needed to help you set up those virtual environments. Kyverno also provides a standalone CLI, which can be leveraged in your CI/CD pipelines to help validate or mutate resources before you push those configurations or deploy the applications into your real cluster. And with the image signing feature, the image verification rule, we've seen a lot of use cases where users would like to verify image signatures as well as attestations.
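As a sketch of that `latest`-tag best practice, a validate policy along these lines — adapted loosely from the Kyverno sample-policy library, so treat the exact pattern syntax as an assumption — would flag any Pod whose container image uses the `latest` tag:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Audit   # report violations; switch to Enforce to block
  rules:
    - name: require-immutable-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Using a mutable image tag such as `latest` is not allowed."
        pattern:
          spec:
            containers:
              # every container image must avoid the `latest` tag
              - image: "!*:latest"
```

Run in Audit mode first to see how many existing workloads would fail, then flip to Enforce once the fleet is clean.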
Kyverno also has a subproject called Policy Reporter, which consumes the policy reports generated by Kyverno (or possibly other policy engines) and can push out the violations or send alerts to downstream targets. So there are many features available in Kyverno, but these are the common use cases we've collected from the community and that are used most. With that, let me hand off to Luika, and she will demonstrate one Kyverno policy and show how it works in a live cluster. Thank you so much, Shuting — thank you for the brief introduction and for telling us about the important use cases of Kyverno. Let me quickly share my screen. Is my terminal visible? Yes. So today we're going to talk about an add-labels policy — actually, I'll be demonstrating it to you. By now I've understood what a policy is: a policy is a collection of rules. As you can see, underneath the spec we have written a rule, and each rule can have either a match block or an exclude block. In our policy we match on resource kinds; a rule could also match on resource names, user groups, and user names. And each rule can have only one task: you can either mutate the resource, validate the resource, or generate a resource. So let's go ahead. The aim of our Kyverno policy is to mutate matching resources by adding the label `foo=bar`. I'll quickly share my screen. I've already created a directory called kyverno-policy and pasted this policy into it. As you can see, we have the spec and the rule, and it will match the Pod, Service, ConfigMap, and Secret kinds. Let's go ahead and actually apply it. As you can see, my cluster is up and running, and now I will go to the documentation to see how to install Kyverno. They have given us two ways of installing Kyverno: directly from the latest release, or using Helm.
I'm going to go with method one, installing from the latest release manifest. I'll copy the command... and now it's installed. Let's actually check — yes, we can see a dedicated kyverno namespace where it's installed. Now I'm going to apply my add-labels policy. As you can see, the add-labels policy is created; `kubectl apply` creates and updates resources. Our next step is to test it out. I have created a resource.yaml with a Pod and a Service. We've seen that our policy has been created; now let me go ahead with the resource part. In resource.yaml I have a Pod and a Service, separated by three dashes, which is YAML syntax for multiple documents. Now I'll go ahead and apply it. As you can see, my Pod and my Service have been created. To check whether the labels have been mutated, I'll get the resources in YAML form — and there you can see the label `foo: bar` has been added by mutation. So far it's been successful. Now, moving on to the next tool: Mritunjay, could you please tell us more about containers, and what magic DockerSlim does? Mritunjay, I think you're on mute. Okay, yeah — that's an age-old Zoom problem we're used to by now. Thank you so much, Luika, for that amazing intro and demo of Kyverno. Moving to the next part, let me share my screen — I hope my slides are visible. Cool. So now we know what Kyverno is, the first of the two tools we're talking about integrating here. Next, let's move on to containers and how DockerSlim helps us there. If you look at this graph, you can see that containers have become the norm as cloud adoption has increased sharply. Before we talk about how and why they are being adopted and why we need a tool called DockerSlim...
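The policy shown in the demo looks roughly like the following — a reconstruction based on Kyverno's add-labels sample policy, so the exact rule name is an assumption:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
    - name: add-labels
      match:
        any:
          - resources:
              kinds:
                - Pod
                - Service
                - ConfigMap
                - Secret
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # label injected into every matching resource at admission time
              foo: bar
```

Because mutation happens in the admission webhook, the label appears on the objects even though resource.yaml never mentions it — which is exactly what the `kubectl get -o yaml` check in the demo confirms.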
...let's talk about what containers are. I'm going to explain containers by showing you something I learned from Julia Evans's beautiful Twitter threads — the images you see are actually from a thread of hers explaining containers, and in most of my slides I'll be pointing you to the places where I learned these things. So, what are containers? Containers are not some out-of-this-world magic. They are nothing but processes running on a server. A container is basically a group of Linux processes, and containers are very good at isolation — but they isolate processes from each other using the Linux kernel; they are not isolated from the Linux kernel itself. That is a very important thing to know today, and it is why what we're going to discuss next is important. Now that we know containers sit very close to the heart of the Linux kernel, we know they are also vulnerable to attacks — so how do we control and reduce the attack surface? There are various options: we can restrict the amount of RAM we give them, we can reduce their access to the disk, and — the most important part for this talk — we can restrict the set of syscalls a container has access to. So what are we going to discuss ahead? Now that we know a little about containers, let's talk about the problems developers face while working with them. Making a container can still be called an easy task, but making containers production-ready is still difficult. Why?
Because, as we said, containers are just processes on the kernel, and if they are processes, they are prone to attacks. How do you manage all of that? How do you reduce your container size? How do you reduce the attack surface? And — the thing we're touching on now and will discuss in more detail later — how do you create a security profile that actually controls and filters the syscalls available to the container? That is where the tool DockerSlim comes into action. DockerSlim is a tool that developers can consider almost magical: it provides a set of commands that help them build a much more minified image. For example's sake, suppose your image was around one gigabyte — DockerSlim can sometimes reduce it to around 30 MB. How does that kind of reduction happen? Essentially by removing the vulnerable and unnecessary parts. We're not going into the details of DockerSlim's architecture today, but roughly: shells left open, artifacts that are no longer required — all the things that can expose the end user to risk — are cleaned out of the final image. That is how it minifies your image, and it doesn't only optimize images for production: it also helps by creating an automatic seccomp profile for you. So you don't have to worry about knowing all the syscalls, knowing which syscalls to restrict, or creating allow and deny lists of syscalls — you don't have to be a Linux syscalls expert. DockerSlim does it for you out of the box. It's a very versatile tool with a lot of examples.
It can minify Node applications, Python applications, and it now even supports Compose files: if you have a Docker Compose YAML, it works there too with different kinds of services — you can tell it which service you want to minify, and it will do so accordingly. Now that we know this, I'll ask Luika to give you a brief demo of what DockerSlim does standalone. We'll discuss both tools together later, but a brief little demo of DockerSlim on its own will be a good idea for the audience. So Luika, can you show us some magic being done by DockerSlim? Yes — thank you, Mritunjay. First of all, I learned a lot of interesting things and facts about containers. Let me go ahead and share my screen. We'll be doing a demo of container image minification with the help of the DockerSlim tool. I hope everybody is able to see my terminal. To start, I'll bring up the DockerSlim documentation. As you can see, under downloads they have given us different options: you can use the zip package, or there is also a scripted install, which is what I'm going to use. Let's go ahead and paste it. The idea is to create a container and then minify its image using the `docker-slim build` command. So I'll create the container, list the Docker images, and then run the DockerSlim build command against that image. Now we'll create a Docker container: I'll do `docker run` (you could also do `docker create` and then `start`), name the container my-nginx1, map port 80 to 80, run it in detached mode, and give the name of my image. It's unable to find the image locally, so it pulls it from the registry — and yes, the container is created. Let's check that it's actually there: you can see the nginx image, its image ID, created two weeks ago, and its size.
Now the main part: Docker image minification. I'll do `docker-slim build` with the image name. You can see another image, nginx.slim, has been created with the latest tag, and it has a size of 12.2 MB. You can compare the difference between 12.2 MB and 142 MB — that is the magic of DockerSlim. And that's pretty much it. Now we can go ahead — Mritunjay, could you please take the next part and tell us more about what we're going to do at the intersection of DockerSlim and Kyverno? So, we know what DockerSlim is, and we know what Kyverno is to some extent. Now let's talk about the problems — the problems that are the reason this talk is happening. As we mentioned a little earlier, containers are nothing but Linux processes: good at isolation, but not isolated from the Linux kernel itself. Now, every process runs by making a series of system calls. The Linux kernel has a lot of work to do: whenever you execute something, a bunch of things happen behind the scenes — reading from the hard drive, bringing data into memory, making network connections, killing the process after it's done. Your program does all of this with the help of system calls. And although there are multiple ways and multiple tools to find out what happened behind the scenes — which system calls were made — it's still difficult, especially if you have a cross-architecture setup (which, outside of certain production use cases, doesn't happen often). But syscalls are important, and they are very much architecture-dependent: if you move between x86_64 and other architectures, it is very possible that your syscalls will be different from the ones on ARM.
And that is why writing a seccomp profile — a secure computing mode profile, the Linux kernel feature we're going to talk about next — matters. But before we talk about seccomp, let's also talk about another interesting thing: capabilities. The root user can do anything — which is convenient, but risky. Sometimes we need to grant permissions specifically to the services running, especially when it's your container and you want to filter the actions it can perform. That is something we can do with the help of capabilities. So even before we come to seccomp, there is this feature of capabilities — but it does not offer fine-grained control. It gives you some controls, but it has its limitations: it grants broad classes of permission rather than actually letting you control individual syscalls. So how do we control syscalls? As of now, in the 5.x series of the Linux kernel, we have more than 300 syscalls for the x86_64 architecture. Knowing them all, writing them all out, and knowing which to block and which to allow is a difficult task — but that is what seccomp was made for. And how can attacks happen? There are various ways an attack can happen when you're dealing with containers. Some of the most common are supply chain attacks — you don't know where an image is coming from, and that can be a potential risk. At the same time, vulnerabilities can hide inside your containers: you know that when you're using an FFmpeg codec, it should not need access to read memory from other parts of your system — yet that can happen if you are not controlling your syscalls. These kinds of attacks can potentially exploit our systems.
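As an illustration of capabilities in a Kubernetes context — this manifest is not from the talk, and the pod and container names are made up — a securityContext can drop every capability and add back only what the workload needs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cap-demo              # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx
      securityContext:
        capabilities:
          drop: ["ALL"]                 # start from zero extra privileges
          add: ["NET_BIND_SERVICE"]     # allow binding to ports below 1024 only
```

Note how coarse this still is: `NET_BIND_SERVICE` gates a behavior, not individual syscalls — which is exactly the limitation seccomp addresses.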
To control that, the Linux kernel came up with the secure computing mode (seccomp) feature, which lets us filter which syscalls we allow and which we do not. But, as we discussed, how do we automate this and make the developer's life easy? That is where DockerSlim comes into the picture. And once DockerSlim is in the picture, how do we make sure that when these containers are orchestrated as pods in a Kubernetes cluster, the profiles are enforced there? That is where Kyverno comes into the picture. So now we can see the coupling of Kyverno and DockerSlim happening. Let's move to the real part — the final demo, where we actually integrate both tools and see how they help us enforce policy and make our Kubernetes workloads more secure. First, `mkdir` to create an artifacts folder — so now we have an empty folder. Now let's do the DockerSlim magic first. What command are we going to run? Well, I always keep the commands handy, so I'll have them here in front of you — all the future commands too — but the magic is something I'll show live. So this is the command; let me clear the screen so it becomes a little clearer for the audience. Let's see how this works. The build command is something Luika already showed us, and this is the image we are going to apply it to. But what is this extra thing we see here? This is nothing but the artifacts folder we created: we are going to copy the generated artifacts into it, and DockerSlim has a built-in argument for us to do that. This artifacts folder will then contain our seccomp profile. In the generated profile, the default action is SCMP_ACT_ERRNO — which is just like Kyverno's Enforce-versus-Audit analogy.
So this default action is going to be enforced: it will block any syscalls that are not in this list, because this is an allow list. It could instead have been a log action — SCMP_ACT_LOG — which would be an audit kind of action, where it would just log the syscalls that are not in the allow list. And here is another very important thing I mentioned earlier: architecture. It's very important to know which architecture a seccomp profile is for. Since my local host is x86_64, we are going to use that. Why is it important? Take the exit_group syscall, for example: it has one syscall number on x86_64, and on ARM it may have a different number, which can create problems in identifying which syscall to actually invoke. That is why having the architecture in the profile is very important. So we have that, and these are the allowed syscalls. Why do we have an allow list instead of a block list? You could build a deny list where, as the name sounds, all the listed syscalls are blocked. But the Linux kernel is continually being developed, and it may happen that a block list fails to capture a future syscall that arrives in a future kernel upgrade — and that can be a vulnerability. An allow list makes sure that only the syscalls we know about are allowed, and nothing else happens. So this is how a seccomp profile looks. Great — now that we have this, what's the next step? The next step will be to install Kyverno, I guess.
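For readers following along, a seccomp profile of the kind DockerSlim generates has roughly this shape — the syscall list below is a tiny illustrative subset, not the full profile the tool would produce for nginx:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "action": "SCMP_ACT_ALLOW",
      "names": [
        "accept4", "bind", "close", "epoll_wait",
        "exit_group", "listen", "read", "write"
      ]
    }
  ]
}
```

Any syscall not in `names` falls through to `defaultAction` — here SCMP_ACT_ERRNO, which fails the call with an error; SCMP_ACT_LOG would merely record it, giving the audit-style behavior described above.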
Just to make sure we're doing it right, let's check. We are going to create the cluster first — a kind cluster, of course — and we'll see how we create it, because it's a little tricky. Actually, we are going to use our own configuration. Why? Let's go ahead and see. So, if you look here, this is not a normal `kind create cluster`: we are using our own configuration because we need to mount the artifacts folder — which contains our seccomp profile — into the kind node image that will be running our cluster. That is why we have this: this is the hostPath, my local path where the artifacts folder holding the seccomp profile lives, and this is the containerPath, the path inside the node container where the seccomp profiles we created with the help of DockerSlim will exist. We create our cluster with that configuration. As soon as it's ready, the next step is to check whether the mount worked. Once the cluster is there, we'll exec in and see whether our seccomp profiles are loaded or not. Of course, the container ID will be different here, so let's just find it. Yes, we have a Docker container here, and this is its container ID. We'll use this container ID with this command, which is nothing but checking whether we have our seccomp profiles there or not. Let me exec into it — and yes, we have our seccomp profile here. Now that we have everything with us, the next step is to install Kyverno. I'll be using the Kyverno 1.7 release, which I think was the latest stable release when I checked — I think 1.8 is coming up, I'm not sure. But let's install Kyverno, because it is going to do the next part.
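A kind configuration for such a mount would look roughly like this — the exact paths are assumptions standing in for the ones used in the demo; the kubelet resolves Localhost seccomp profiles against its seccomp root directory (by default /var/lib/kubelet/seccomp):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
        # host directory where docker-slim wrote its artifacts,
        # including the generated seccomp profile
      - hostPath: ./artifacts
        # kubelet will look up Localhost profiles under this root
        containerPath: /var/lib/kubelet/seccomp/profiles
```

Create the cluster with `kind create cluster --config <file>`, and the profile becomes visible inside the node container, which is what the `docker exec` check in the demo verifies.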
We have our seccomp profile ready, and we have our container image ready — the minified one — which I have pushed to my private registry and will be using. Let's first check whether Kyverno is up and running — yes, we can see that Kyverno is up and running now. Now that we have everything in place, I'm going to try to do something that should not be done. To appreciate Kyverno's importance, let's see what would have happened if Kyverno was not with us. What I'm going to do is not apply the policy right now; I'm going to use the dirty resource straight away. And why am I calling it a dirty resource? Let's find out by looking at the difference between the two resources we have. In this folder we have the policy, which we'll discuss later — but before that, let's see what these resources look like. The dirty resource is this one, the unset resource. Why is it unset? Because it does not have the seccomp profile set, while the set resource has the seccomp profile set, making our pod security much more hardened. So let's see how the unset resource looks. As you can see, there is no mention of a seccomp profile — it's just a simple pod, and we specify which image to use. This is the image I have pushed: the minified nginx image that was just built (I pushed it earlier to save time). And this is the container port it will serve on. Now that we have seen this, let's also see how the good resource looks. This one, as we can see, has a difference.
This one has the securityContext with a seccompProfile and a localhostProfile path, which is relative to the container, of course — because the profile lives inside the node container; it's the path inside the container where our seccomp profile, the one we saw, is mounted. Now let's go ahead and try to apply the unset resource and see whether it gets admitted to the Kubernetes cluster. Of course it will come up and run, because there is nothing to block it. So let's create the unset resource. Okay — I always forget to create the namespace. Let me create the namespace first, because this resource is applied in a particular namespace. Now the namespace is created. And just like we forgot the namespace creation, we can sometimes forget to add a seccomp profile to our resources — and that is what has happened in this case: we have forgotten to add it. Now if we create this resource, it will be happily created. But this is again a potential vulnerability; this is a risk, and it should not have happened. Now that this pod is up and running, it can be a victim of an attack. What would have happened if we had applied a policy? Let's switch gears back to Kyverno: let's apply its policy — but first let's study the policy and see what happens. I'm just going to delete this pod, because you don't deserve to be here, my pod. So the pod has been deleted. Now let's look at what the policy looks like — before we apply it, we need to know what's in it, right? This is our policy. It's a cluster-wide policy, because we don't want it only in one particular namespace — we want it everywhere. And the name of this policy is check-seccomp-strict.
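The two resources in the demo differ only in this securityContext block. A sketch of the hardened ("set") pod — with the namespace, image reference, and profile filename as placeholders, since the demo's actual values aren't shown in full — looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-set             # hypothetical pod name
  namespace: demo             # hypothetical namespace from the demo
spec:
  containers:
    - name: nginx
      image: registry.example.com/nginx.slim   # placeholder for the pushed minified image
      ports:
        - containerPort: 80
      securityContext:
        seccompProfile:
          type: Localhost
          # resolved relative to the kubelet seccomp root on the node,
          # i.e. the directory mounted via the kind config
          localhostProfile: profiles/nginx-seccomp.json
```

The "unset" resource is the same manifest minus the entire `securityContext` block — which is precisely what the policy below is designed to catch.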
Why "strict"? Because, as is mentioned in the policy's description and annotations, we are making sure the seccomp profile is neither explicitly set to Unconfined nor left unset. On what kind of resources is it going to be applied? It applies to Pods. And what is the message, and what action are we going to perform? Earlier, as Luika showed us, she was doing a mutate action; here we are going to do validation, and this will be an enforced kind of validation: on failure of the policy, it will block any pods that are not compliant. This message will be printed if we do not follow the policy, and this is the pattern — the seccompProfile type should be either RuntimeDefault or Localhost. As we know, the set resource has Localhost, while the unset resource does not have the securityContext and seccompProfile at all. So this is how the policy looks, and after applying it, it should block our unset resource. Now let me apply the policy. With this, our policy is created and applied. Now, if we try to create the unset resource, it should be blocked. Let's see — yes, we have success here! And this is a very important and crucial part of the demo. We had the seccomp profile coming from DockerSlim — okay, great. But just like I forgot to create the namespace, someone, sometime, can run the risk of forgetting to add the securityContext and seccomp profile, and that can be a real headache for the people managing your security. That is where Kyverno comes into the picture: it makes sure you get as close as you can to full pod security in your cluster.
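A simplified sketch of such a policy follows. The check-seccomp-strict policy in the Kyverno policy library also validates container-level securityContext fields; this pod-level-only version is an assumption made here for brevity:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-seccomp-strict
spec:
  validationFailureAction: Enforce   # block non-compliant pods at admission
  rules:
    - name: check-seccomp-strict
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: >-
          The seccompProfile must be set, and its type must be
          RuntimeDefault or Localhost; Unconfined (or leaving the
          profile unset) is disallowed.
        pattern:
          spec:
            securityContext:
              seccompProfile:
                # requiring this field to exist is what rejects the "unset" pod
                type: "RuntimeDefault | Localhost"
```

Because the pattern requires `seccompProfile.type` to be present with one of the two allowed values, a pod that omits the field entirely fails validation just as one that sets it to Unconfined would.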
Now that we have this, let's also try the set resource — the one that is compliant with our policy. Will it be created or not? Let's do that. Of course, it follows the policy we have created: it has the seccomp profile. And now, if we see it in action, we can check whether it is working — let me check if it's running. It is running, and that completes our demo of the intersection of DockerSlim and Kyverno. Let's move back to our slides to see if you have any questions — we're happy to answer them. And of course, even after this conference, you can always connect with us: these are our social media handles, and I would love to connect with all of you. Thank you so much to everyone for joining us, and I hope you had a great time learning with us. Thank you.