Hello, and welcome to our talk. My name is Michael Le. I work at IBM Research, and my co-speaker today is Sascha Grunert from Red Hat. Our talk is entitled "Don't Trust Your Neighbors: Securing Pods via Scheduling." This work was done in collaboration with other folks at IBM: Salman Ahmed, Dan Williams, and Hani Jamjoom.

Here is a diagram that shows a typical deployment of a Kubernetes cluster. You have your typical set of nodes; on each node there is a set of pods, and inside each pod there can be one or more containers running. The containers in those pods all share a single, highly privileged kernel. The containers typically run applications that can have very diverse system call usage profiles, and within the containers there may also be system call filters, such as seccomp filters, being used to enforce security for that pod. In a typical deployment, whether it is a multi-tenant cluster or a single-tenant cluster that uses namespacing to isolate different users, users typically do not have fine-grained control over which pods are their neighbors on a node. Some of those pods might have access to system calls that are, quote-unquote, risky, and pods using these risky system calls can land on the same node as other pods.

So what do we mean by risky system calls? These could be system calls that are known to have had vulnerabilities, or that are known to have been used in existing exploits, or simply system calls that are rarely used and therefore more likely to harbor undiscovered bugs and vulnerabilities. Pods that have access to such potentially risky system calls are what we dub bad neighbors: bad relative to the other pods in the system that do not use these risky system calls but have to sit next to the pods that do. Because they all share a single privileged kernel, a container with access to a risky system call can potentially exploit a vulnerability in it, perform some kind of privilege escalation, take over the kernel, break out of containment, and from there jeopardize the other pods running on the same node. Those other pods are what we call the victim pods, and the affected nodes the victim nodes.

So the question is: given this problem, is there a way to improve the situation? Can we improve security by reducing or avoiding the impact of these bad neighbors? What we observe is that if you have a cluster with some pods that are, quote-unquote, bad neighbors, and you know in advance something about those pods and why they are bad, you can use that information before you do placement to group them together onto one node or a small set of nodes and isolate them from the rest of the good pods. This is the underlying mechanism that we designed, implemented, and evaluated, and the main focus of this talk is to discuss exactly how we go about doing this.
OK, so in the remaining part of the talk, we will go deeper into what SySched is, how it works, and the security benefits that can be had. After that, Sascha will discuss in more detail how SySched is operationalized, mainly through the Security Profiles Operator (SPO) and how we use it to get access to the pertinent information about the pods in order to do the scheduling. Then we will have a short demo showing how SySched is deployed in a Kubernetes cluster, mainly as part of the scheduler pod, as well as how SPO is deployed and how we leverage it to get the information needed for scheduling. Finally, I will briefly discuss some evaluation we have done showing the security benefits, as well as any performance impact that can be seen from this work.

Underpinning SySched is a metric we have defined called extraneous system calls, or ExS. For a given pod, this metric counts the number of system calls that the pod itself does not need or use, but that it is nevertheless exposed to because the other pods on that node, its neighbors, are using them. These are the system calls that are extraneous with respect to the pod in question. The metric captures, for a given pod, the amount of risk it takes on when it is deployed on a particular node, given the set of pods already running there.

The diagram on the right shows a node hosting three pods, P1, P2, and P3, and the numbers 1 through 9 along the bottom are just representative system calls. Pod P1, in the top figure, uses system calls 1, 2, 3, and 5. For this pod, the set of system calls extraneous to P1 is 4, 7, 8, and 9: the system calls used by the other pods on that node that P1 itself does not use. So the ExS score for P1 here is 4, and the scores for P2 and P3 follow from the same reasoning.

At a high level, for any incoming pod, SySched calculates this ExS score against every feasible node in the cluster, uses the scores to rank the feasible nodes, and selects the node with the lowest ExS score, meaning the node with the lowest risk for that particular pod.
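To pin the metric down, here is one way to write it in notation (ours, not from the talk): for an incoming pod p with system call set S(p) and a candidate node n already hosting the set of pods Q(n),

\[
\mathrm{ExS}(p, n) \;=\; \Bigl(\bigcup_{q \in Q(n)} S(q)\Bigr) \setminus S(p),
\qquad
\mathrm{score}(p, n) \;=\; \bigl|\mathrm{ExS}(p, n)\bigr|
\]

The scheduler prefers the feasible node with the smallest score; the weighted variant described later replaces the plain set size with a sum of per-system-call weights.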
Before we go any further, it is important to step back and look at why we think this particular scheme would work at all. If all the pods running workloads out there effectively used the same set of system calls, or all used every system call available, there would be no room to differentiate one pod's system call usage from another's. The ExS score would effectively be zero everywhere, and there would be no way to discriminate between placing a pod on one node or another. So we took a look at about 45 popular containerized applications from Docker Hub, traced their system calls dynamically while running a representative workload on each, and found that, overall, only a small set of system calls is actually used by these applications.

Typically, an application used as little as about nine percent of the roughly 335 system calls available on a Linux system, and at most about 37 percent of them. So there is a large variation in the number of system calls that real applications actually use, and it is certainly not the case that all system calls, or even a very large fraction of them, are used. This range gives us hope that there is room for scheduling: room to use the ExS score to discriminate between placing a pod on one node versus another. Another indication that this approach would work well is that there is also a large variation in which system calls are shared across applications. We found that the overlap between the system call sets of any two applications can be as low as roughly 23 percent. This tells us there is a lot of room to play with here, allowing us to use the ExS score to rank the placement decision for each incoming pod against a given set of nodes in the cluster.

This slide presents a scheduling example to show in detail how the SySched scheduling algorithm actually works, on a cluster of just two nodes. In column A, the nodes are initially empty, and pod P1 arrives using system calls 1, 2, 3, and 5. SySched calculates the ExS score, which in this case is zero for both nodes, since neither node has any other pods, and hence no potential neighbors. SySched therefore picks a node at random to place P1; it picks node 1, as you see in column B. In that column a second pod, P2, arrives, and SySched calculates P2's ExS score against each node: a score of 7 for node 1 and a score of 0 for node 2, since nothing is running on node 2 yet. Given these two scores, SySched picks the node with the smaller ExS score for P2, which in this case is node 2. In column C you see the placement of P1 and P2 across the two nodes, and a third pod, P3, arrives. SySched again calculates the ExS score for P3 given the pods already on each candidate node: 2 for node 1 and 7 for node 2. Picking the lowest ExS score, SySched places P3 on node 1. That is, overall, how SySched makes its scheduling and placement decisions.

SySched also supports weights for individual system calls or groups of system calls. The weights denote the riskiness of those system calls, and they get added to the ExS score when the corresponding system calls appear, which can make a pod using such risky system calls less attractive as a neighbor. For instance, if there is a system call that is rarely used, you can attach a weight to it.
Then, if a pod happens to use that system call, it will effectively be shunned by the other pods coming into the cluster. The weights can take different values. A value of negative one means that the system call, even if used by some pod on the node, is not counted as extraneous at all. Zero means it is counted as extraneous but gets no special consideration beyond that. Any value greater than zero means it is extraneous and additionally carries extra risk, with the magnitude of the number indicating how much risk that particular system call poses to the system. The administrator can translate their notion of risk into these weight values. As for where risk information can come from: it could be derived, for example, from the CVSS scores of CVEs that are known to affect the kernel and that have system calls associated with them, or from existing techniques that analyze which system calls appear in real-world exploits, how they are used, and whether they are unique to those exploits. Given that analysis, you can translate the risk into weights that are then applied in SySched.

Now that we understand what SySched does in general and its basic mechanism, it is a good time to step back and talk about what SySched does and does not do. To be clear, the solution we are presenting today does not prevent risky or potentially vulnerable system calls from being executed inside a pod, and it does not prevent an attack that leverages such system calls from happening. An attack can still occur. What it does do, as you can hopefully see from the example on the previous slide, is co-locate and group together the potentially harmful pods that use these system calls, centralizing them, while keeping the pods that do not use those system calls isolated away from the riskier ones. In doing so, it reduces, on average, the blast radius of an attack: if an attack does happen and succeeds, it limits the number of victim nodes compared to default scheduling, where the malicious pods could be spread evenly across the nodes of the cluster; instead they are clustered together in one place. It also reduces the number of victim pods the attacker can reach. So those are the main things SySched does not do, and the things it does do.

In addition, because we use system calls as the basis for scheduling, the idea is that a developer writing software for a container can be more cognizant of which system calls they use and do not use, and careful about the system calls pulled in by the libraries they depend on, in order to reduce the set of system calls that are actually necessary and, effectively, be a good citizen in this ecosystem.
By doing so, we hope to incentivize the use of safe system calls, and of the smallest set of system calls necessary in a pod, and in this way move toward a more secure and more deliberate approach to developing software for the cloud.

This slide shows an overview of how SySched is designed and how it fits into the overall Kubernetes scheduling framework. The SySched components are shown in pink. SySched can run as part of the main scheduler in a single-scheduler setup, or as an independent secondary scheduler in its own pod, depending on how you configure your scheduler. As the diagram shows, when a pod comes in, the scheduler first applies a set of filters to it; one of these, for example, identifies all the nodes that have the appropriate resources to run that pod. Once this set of feasible nodes is determined, it is passed to a set of scoring algorithms that rank the feasible nodes to find the best one. These are the scoring plugins, and SySched is one of the plugins that can be applied to rank the feasible nodes. You can configure the scheduler to run whatever combination of these filter and scoring plugins you want: you can run SySched by itself, using only its ranking algorithm independently of anything else, or you can combine it with the other mechanisms already in the system.

SySched itself is fairly easy to use and deploy. However, one of the main challenges in using something like SySched is its reliance on being able to generate the system call profile for a pod. Even though this is an orthogonal problem to the scheduling mechanism we just discussed, we nevertheless need system call profiles to inform the scheduling decision, so obtaining these profiles is essential for us, and being able to do so easily is what makes this kind of scheduling practical. There are several ways to generate the system call profile for a pod. You can use static analysis, and there is a lot of good work around that. You could also assume that the system call profile is supplied by the developer along with the application, though that is probably less likely to happen. In most cases in industry, the dynamic approach is used: the pod is run and exercised for some time, the set of system calls it executes is recorded, and that becomes its system call profile.

Fortunately for us, there is a great open source project called the Security Profiles Operator that automates and manages a pod's system call profile, its seccomp profile, on the pod's behalf. We leverage it heavily and have a close integration with it, which lets us obtain these profiles in a comfortable and easy way. We will discuss that next. Sascha, my co-speaker, is going to introduce the Security Profiles Operator, SPO. Please take it away, Sascha.

Thank you, Michael.
The Security Profiles Operator is a Kubernetes operator for managing security profiles as custom resources. It supports seccomp profiles, AppArmor, as well as SELinux. Its main purpose is to synchronize profiles across worker nodes automatically, so that users don't have to take care of that manually, which is what they would otherwise have to do without something like the Security Profiles Operator. The user usually just deploys the operator, which is a manager, and this manager takes care of creating and updating a DaemonSet that runs on every node, or on every configured and selected node. That DaemonSet runs a daemon which watches, updates, and is also capable of creating the SeccompProfile, SELinuxProfile, and AppArmorProfile custom resources. Users can then interact with those profiles directly: modify them, create new ones, or delete them. The daemon takes care of watching all pods that actually use those profiles and installs the profiles on the node before the pods can use them; and when a profile is no longer in use, the daemon also takes care of cleaning it up and removing it. The layout of the profiles on disk is defined by the Security Profiles Operator, and the custom resources help users consume them in an easier way, and also split responsibilities between cluster administrators and cluster users, who are just the consumers of those profiles.

Here is how the Security Profiles Operator provides those custom resources, using a seccomp profile as an example. Usually we define seccomp profiles as an allow list of syscalls. We provide a resource of kind SeccompProfile, give it a name, and add some metadata around it. The default action is errno (SCMP_ACT_ERRNO), which means a syscall not on the list gets blocked. There are a couple of other actions available, for example allowing syscalls, logging them to the audit log, tracing them, or killing only the thread, and so on, but errno is one of the most common ones. We also have to define the architectures our seccomp profile should apply to, since different kernel architectures can provide different environments where different syscalls are or are not available; this also depends on the kernel version, for example. Then we specify the list of syscalls that are actually allowed in our container. In this case, the example is a base profile for runc, which allows a set of syscalls, not all of which are displayed here; capget, capset, and chdir, for example, are some of the syscalls required by the OCI runtime runc to actually start a container.

The Security Profiles Operator has a much more extended feature set beyond providing these custom resources and syncing them to nodes. For example, we have various configuration options, we are able to bind profiles to actual workloads, and we can stack profiles, as we already saw with the runc base profile: a profile can be stacked on top of a different seccomp profile so that the allow lists are combined. We also expose a lot of metrics and have a few experimental features.
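To make that walkthrough concrete, a SeccompProfile resource of the shape just described looks roughly like this; the profile name and the (truncated) syscall list below are illustrative rather than copied from the slide:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: runc-base-example
  namespace: default
spec:
  defaultAction: SCMP_ACT_ERRNO   # anything not listed below is blocked
  architectures:
    - SCMP_ARCH_X86_64            # syscall availability differs per architecture
  syscalls:
    - action: SCMP_ACT_ALLOW      # the allow list
      names:
        - capget
        - capset
        - chdir
        # ... further syscalls needed by the OCI runtime and the workload
```

A profile like this can also reference a base profile in its spec (the profile stacking mentioned above), so that the allow lists of both profiles are combined.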
One of the main features is profile recording, which can be done using audit log parsing or eBPF. If we look at the architecture of the Security Profiles Operator for eBPF-based recording, things get a little more complex. We no longer have just the manager and the DaemonSet: there is also a webhook that adds annotations to pods, and the daemon watches those annotations and then tells the eBPF recorder to record syscalls. Starting and stopping the recorder is handled by the daemon and a dedicated eBPF recorder instance on the node. The eBPF recorder records the profile, and when it is done, the daemon stops the recording and takes care of installing the profile.

How does an end user actually use this feature? End users create a separate custom resource for it, the ProfileRecording, and the daemon watches those resources as well and acts based on them. Let me give you a demo of profile recording with the Security Profiles Operator.

First of all, we double-check that everything is up and running. The Security Profiles Operator relies on cert-manager, for example, if we deploy it on plain Kubernetes, and we can see that the SPO manager is running and the webhook is running as well. We also have our daemon, which is called spod; it is running and configured with the eBPF recorder enabled. To actually be able to record anything, we also label our namespace, in this case the default namespace, to enable recording; otherwise the Security Profiles Operator will not do anything at all in that namespace. If we now look at our example recording, we can see that we want to record a seccomp profile, that it uses the eBPF recorder, and that it has a pod selector matching app=nginx. We apply this recording to our cluster and double-check that it is available: we can get profile recordings, and we see that our recording with the match label app=nginx exists in the cluster.

If we now run an example pod, say an nginx server, we can see it is an ordinary pod carrying the label app=nginx. We apply the pod, and once it is up and running we can simply query the nginx URL to see that it is working; we get the default nginx landing page. If we then delete the pod again, which completes the recording, we can check whether a seccomp profile is now available in the cluster. And indeed, we have the recorded seccomp profile for the nginx container, installed seven seconds ago and available as a localhost profile. Looking into the seccomp profile, we can see it contains all the syscalls required to run nginx: for example, it has to bind an address and accept incoming connections. This profile is now ready to be used by our pod. That means if we change the pod manifest to reference the localhost profile produced by our recording and apply the pod again, the pod comes up and now runs with the recorded seccomp profile. And if we curl it again, we see that the pod runs in a more secure environment, but it still works as intended.
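The manifests used in a demo like this look roughly as follows; the resource names, image tag, and recorded profile name are illustrative, and the localhost path simply follows the operator's default on-disk layout of operator/<namespace>/<profile>.json:

```yaml
# Recording: tell SPO to record a seccomp profile for pods labeled app=nginx.
# The namespace must be labeled spo.x-k8s.io/enable-recording=true first.
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: recording-nginx
spec:
  kind: SeccompProfile
  recorder: bpf                 # use the eBPF recorder
  podSelector:
    matchLabels:
      app: nginx
---
# Replay: the pod manifest changed to use the recorded localhost profile.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: operator/default/recording-nginx-nginx.json
```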
There are a number of interesting implementation details behind eBPF-based seccomp profile recording in the Security Profiles Operator. We use CO-RE, compile once, run everywhere, to achieve maximum portability of the code. We also use the mount namespace for tracking processes, because there can be multiple PIDs inside a single container. We did some reliability and performance work to be able to record multiple containers in parallel, for example by caching the process-to-mount-namespace relation and the container lookup. We also now support merging recordings across replicas: if you record a deployment and then use, say, a service to access the pods, the Security Profiles Operator is capable of merging the individual recordings into one profile.

All in all, eBPF is one of the most reliable ways to record syscalls, but it comes with a few downsides. There is a security consideration, because eBPF programs run in kernel scope and therefore see all processes running on a node. Syscalls can also differ across kernel versions and architectures: on some architectures certain syscalls are not available at all, and some kernel versions do not support all syscalls, so there can be fallback paths being exercised while you record. This means that, depending on how your Kubernetes environment is set up, and because OCI runtimes use different syscalls to set up containers, you should make sure that the environment in which you record profiles is as close as possible to your production environment.

Now a few great recent enhancements to the Security Profiles Operator. We have been working hard on productizing SPO: we offer Helm chart deployments alongside the usual kustomize-based deployment, and we also support OperatorHub, which means we now ship it in OpenShift as well. We are also optimizing SPO for edge scenarios: we now provide a command line binary that works even without Kubernetes at all and that supports recording and replaying seccomp profiles, as well as storing those profiles as OCI artifacts in registries. We have also been evaluating extended seccomp profiles, where we categorize syscalls, for example as risky or slow, group them together, and make seccomp profiles a higher-level abstraction. If you want to learn more about the Security Profiles Operator, just scan the QR code or reach out to us on Slack.

OK, great. Thank you, Sascha. Now, to show the end-to-end picture of how SPO is integrated with SySched: the first order of business is to collect the system call profiles for a pod, as you can see in this diagram. The pod can be run in a dedicated cluster that specializes in running pods that have not been seen in the cluster before, or on a subset of nodes inside an existing cluster. The idea is simply to run the pod's application for a while, collect the system call information, and upload it to SPO. Once that is done, the SPO custom resources can be bound to the pods, so that when the pods are created they have access to those profiles through SPO. At that point, SySched can be used to graduate these pods, which were first run in the dedicated, isolated cluster, into a more hardened, more trusted part of the cluster, for production perhaps.
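The binding step mentioned here uses the operator's ProfileBinding resource, which attaches a recorded profile to any container running a given image; a rough sketch with illustrative names:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileBinding
metadata:
  name: nginx-binding
  namespace: default
spec:
  profileRef:
    kind: SeccompProfile          # the recorded profile to attach
    name: recording-nginx-nginx
  image: nginx:1.25               # pods using this image get the profile injected
```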
SySched can then use the profile information in those SPO resources to make its placement decisions. We have a short demo to show how the scheduler plugin is deployed, as well as how to deploy SPO, create the system call profile resources, and bind them to pods so that when the pods are created they have access to the profiles SPO provides. We will also show the same set of pods deployed with the default scheduler and with the SySched scheduler, so you can see the placement differences, given the pods and their associated system call profiles.

To that end, this slide captures the scenario we will show in the demo. There are two identical scenarios: one set of pods deployed by the default scheduler, and the same set deployed using SySched. The set of pods consists of two Redis servers, ProFTPD, Memcached, MongoDB, and nginx. The two pods in red, ProFTPD-1 and MongoDB-1, are marked in red because in this scenario they do not have a seccomp profile; they are unconfined, meaning they potentially have access to every system call available on the system. From SySched's point of view, these are the worst neighbors you can be next to, because they effectively have access to all system calls, any of which could be dangerous or contain vulnerabilities. You will see that in the top scenario, using the default scheduler, these two pods get placed essentially arbitrarily and, in the instance we created for you, end up on two different nodes, whereas in the second run, where SySched looks at the different system call profiles in use, they end up clustered together on a single node, so that any fallout from an attack is contained to that node rather than spread across several nodes as in the first example.

OK, so now I am going to run the demo. In this demo we first build the SySched plugin locally, which we will then deploy as a secondary scheduler. You can see it being built; that is a standard process. Once it is built, we push it into our local registry. After that, we make a small modification to the scheduler's manifest so that it points to our image, for both the controller and the scheduler, and we add SySched to the scheduler configuration as an enabled plugin. Then we simply use the Helm chart to deploy it. Pretty simple. You can see that both the scheduler-plugins controller and the scheduler component carrying our plugin are now running.

The next step is to install the Security Profiles Operator, SPO. This is also straightforward. We first deploy cert-manager so that the certificates and access control are set up correctly, and then we use the script available in SPO's repository to deploy the different components. You can see the pods that get deployed; the Security Profiles Operator pods are running, and so the operator is now part of the cluster.
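For reference, the scheduler configuration change made in that step boils down to something roughly like the following; the scheduler profile name is illustrative, and we assume here that the plugin registers under the name SySched in the scheduler-plugins framework:

```yaml
# Excerpt of a KubeSchedulerConfiguration for the scheduler-plugins build
# that carries the SySched scoring plugin, deployed as a secondary scheduler.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: false
profiles:
  - schedulerName: sysched-scheduler   # pods opt in via spec.schedulerName
    plugins:
      score:
        enabled:
          - name: SySched              # rank feasible nodes by ExS score
            weight: 1
```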
All right, the next step is to create and install the different system call profiles, the SeccompProfile resources used by SPO. In this step we assume that the recording of the system calls has already been done; the recorded profiles are in the files you see here for nginx, Redis, and Memcached in this example. We just apply these YAML files, and once they are applied, the resources are created in SPO and we can list them. Next we create profile bindings, binding the SPO profiles to any pods that will be created; you can see that all the profiles have been bound.

The next step is to deploy the six pods I mentioned earlier using the default scheduler. At the end of the deployment you see the resulting layout: the two red pods, MongoDB-1 and ProFTPD-1, end up on two different nodes, node 3 and node 2. They are spread out, even though they have the riskiest system call profiles, since being unconfined gives them access to all system calls. In the next case we deploy the same set of pods using SySched, and you can see in the resulting layout that MongoDB-1 and ProFTPD-1 are both placed on node 2; they are co-located together and contained. OK, that is the end of our demo.
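For completeness, directing a pod to a secondary scheduler in a comparison like this is typically done through the pod spec; a minimal sketch, assuming the scheduler was deployed under the name sysched-scheduler as above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-1
  labels:
    app: redis
spec:
  schedulerName: sysched-scheduler   # omit this line to use the default scheduler
  containers:
    - name: redis
      image: redis:7
```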
Now that you have seen the demo, I want to briefly talk about the evaluation we have done, looking at the security benefits as well as the potential performance drawbacks and overheads. This is only a brief overview, to give you a sense of the benefits you can get and the overheads you should consider; many more details are in the paper we have written on this topic, which you can refer to. To run this evaluation, we selected 42 unique applications that have a high volume of downloads on Docker Hub. These applications span roughly eleven different application classes, from FTP servers and web servers to databases and various other types of applications. The cluster can grow to up to 42 nodes, and 126 pods are used for scheduling, so each of the 42 applications can have up to three instances running in the cluster.

There are two questions we want to answer. The first is what the security benefits are, if there are any. In that regard we looked at two things. One is whether there is a reduction in the overall ExS score; remember, the ExS score measures the number of extraneous system calls a particular pod is exposed to when it is scheduled onto the system. We compared against a default, vanilla Kubernetes scheduler that simply places pods on the next available feasible node, and we showed that SySched can do up to two times better in reducing this ExS score. That is great news: there is a lot of room for improvement in terms of isolating pods and limiting the number of extraneous system calls, that is, potentially bad neighbors, they are exposed to.

The second thing we looked at is whether lowering the ExS score actually translates into a concrete security benefit, the reduction of blast radius I mentioned earlier. Here we assumed that certain system calls have CVEs associated with them, based on the historical set of kernel CVEs from roughly the past decade, and we showed that SySched, compared to the default scheduler, can reduce the number of victim nodes and victim pods by up to 46 percent and 48 percent, respectively, for this set of about 50 CVEs. So with a fairly minimal amount of work, you can get some pretty decent security benefits by leveraging information that is already known about a running pod.

Now, looking at performance: one important aspect of this work is that pods with similar characteristics tend to be scheduled together, which can mean that similar application types, say databases or web servers, end up on the same node. Does this introduce a performance risk? The answer, perhaps not too surprisingly, is that it depends. If the node is constrained on a resource that these pods of the same application type all contend for, then yes, there will be some penalty. However, if the node has room for these applications to run, then even if they share the same resources, the performance impact is negligible compared to distributing the same pods across different nodes. So it is really a matter of whether the resources available on that node are sized for running similar applications together; it is not automatically the case that co-locating similar applications on the same node gives you worse performance.

As for the current status of our work: we are open-sourcing SySched, contributing it to the Kubernetes SIG Scheduling scheduler-plugins community. The KEP has already been accepted, and we are finishing up the PR; it has been submitted, and we are working with the community to get it accepted, modifying the code as necessary. There are a few to-dos in progress, and a couple of interesting directions. One is having SySched handle different kinds of seccomp policies: besides allow lists there can be deny lists, purely logging actions, or pods with no seccomp profile at all. We have ways to handle some of these cases, but not yet cleanly or thoroughly, so we are working through how to manage them. We also want to consider extending the SPO mechanism, and the information in its custom resources, to encode risk information in system call profiles, for example whether certain system calls are risky, or associated weights. That information could then be used not only by us, but potentially by others who want to apply it to other kinds of scheduling or other security products.
In addition to that, the last item is that we want to account for runtime information about the pods as well.