Hopefully, you saw last week's episode, last episode, and got the StackRox software up and running, whether you got it from the open source project or you're deploying the product. We'll talk a little bit about what you should do after you have it running, just some basic things you should think about in the product if you're using it. And then we're actually going to dig into some of the use cases like vulnerability management and some of the deployment checking. So we're going to play with the command line. I'm going to show you how to use the API tokens and other things to get through some of the usage. It'll be up to you to decide on your own how to integrate that with a CI/CD pipeline. I can share some resources for using the ACS product and its command line tool with any of the CI/CD tools that you might be using. We have examples that include things like GitLab and GitHub and Azure DevOps. If you're building things in a pipeline, or even if you're doing it on a workstation, where you're building container images and deploying them to Kubernetes, that's a great place where you can get the roxctl command line running to do a security check before you get there. So I'm going to share my screen. I have an ACS environment, or a StackRox environment, up and running. We're going to talk a little bit about the initial installation and, when you have it up and running, what you should take a look at first. Hopefully, you're at this point where you've got a running ACS environment. In my environment, it's really simple. I've got one cluster installed. It's an OpenShift cluster with four worker nodes and three master nodes, so I have seven instances of the collector. In your environment, you may have more or less.
What you should expect to see, of course, is that everything is healthy because you're running all the appropriate deployments and they're all communicating with Central. In my case here, the collector pods show expected seven because I've got seven nodes that I'm running on here. Now, this deployment was done with a couple of options turned on. We'll talk more about that when we get to it. Your behavior here may differ depending on what your settings are. In particular, things like whether or not enforcement is going to work are going to be here in the dynamic configuration. Some of this stuff is going to be after the fact: after you install StackRox, you're going to want to do some of this configuration. The reason that it's not turned on by default is that turning on enforcement and starting to enforce security policies with ACS, or with StackRox, is potentially disruptive. We want to be careful about that. You're putting in an admission controller. There's a webhook that gets registered, and you'll see that it's going to look for things like create and update events. In particular, there's a timeout here that potentially impacts all of your deployment creation in your cluster. Just be a little bit careful. If you're running in a lab environment or a test environment and you're willing to break things, all good. I'm going to go through some examples with policies, and I'm going to try to scope them. Scoping is another good way to control your experiments a little bit, so you're not bringing down a production environment by enabling a policy that does enforcement. A couple of things that you should think about: when you get an installation up and running, you're going to start to see data right away. There's a ton of policies that are preconfigured, and you're going to see inputs like vulnerability management start to populate data in here. That's because the software will go and pull images from public sources like Docker Hub.
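If you want to see that webhook registration for yourself, you can inspect it from the command line. This is a sketch only; the configuration name (`stackrox`) and the field layout are assumptions based on a typical install, so adjust for your cluster.

```shell
# List the webhook(s) StackRox registered, the operations they watch
# (CREATE/UPDATE), and the timeout that can delay deployment creation.
# "stackrox" as the configuration name is an assumption; check with
# `kubectl get validatingwebhookconfigurations` first.
kubectl get validatingwebhookconfigurations stackrox -o \
  jsonpath='{range .webhooks[*]}{.name}{"\t"}{.rules[0].operations}{"\t"}{.timeoutSeconds}{"\n"}{end}'
```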
If the image is pullable anonymously, where we don't need credentials, then it's going to pull those down. You're going to have results like this Logstash version that I pulled specifically to show you that Log4j vulnerability from December. If you're pulling publicly available images, then this should work out of the box. If you're going to try to pull from private repos, that might be something that you'd have to configure. If you're looking to pull container images from a private Docker repo or some internal registry, you're going to want to go and check to see if StackRox has already picked that up or if you have to configure one yourself. It will try to pick them up by looking through your Kubernetes cluster at the secrets that are being used for pulling images from various repos, and it will try to use those. They may not be correct, though. The auto-generated ones here might not be what you want, so you might want to supply a robot account or other credentials to be able to get all of them. In particular, if you're going through your vulnerability management and you look at the list of images, you might have to go through a couple of pages here, and you may see some images that have just zero CVEs listed. One of the things you might want to do is check to see if you've got an integration with that registry and if the credentials are correct. This comes up a lot in an Amazon environment, for example, where you may have multiple AWS accounts that have their own ECR registry, and each one of those is going to need an integration, or you're going to need to manage how the cross-account sharing or the IAM credentials will work. You can use IAM credentials or you can use container roles. You can also assume roles now in ACS, so if you're running in an AWS environment, there should be a method of accessing ECR in different accounts that will work for you in your environment.
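As a concrete sketch of what that registry access looks like on the AWS side, here is a hypothetical IAM policy covering the "list and get layers and metadata" style permissions. The role name (`StackRoxScanner`) and the exact action list are assumptions to verify against the AWS and ACS documentation before use.

```shell
# Hypothetical read-only ECR policy for the StackRox scanner role.
# The role name and action list are illustrative, not authoritative.
cat > stackrox-ecr-read.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      "ecr:DescribeImages",
      "ecr:ListImages"
    ],
    "Resource": "*"
  }]
}
EOF

# Attach it to the role that StackRox assumes for cross-account pulls.
aws iam put-role-policy \
  --role-name StackRoxScanner \
  --policy-name stackrox-ecr-read \
  --policy-document file://stackrox-ecr-read.json
```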
The assume role approach just means that you've got to have a role established that can do the cross-account sharing for ECR. Essentially, what we need to do is give the StackRox software the credentials to be able to pull images, which in AWS terms means permissions like being able to list and get all of the layers and the metadata for them. Without that, you're going to find some gaps in your coverage. You'll find that some of the images don't have those results in them, and so that's one of the things to look at. You'll see that there are quite a few different things here that have come preconfigured, public registries like Quay and Docker Hub. You may have others that you want to use, and just adding them in here will give ACS that access. One of the things that you'll probably want to do would be to set up a notifier. This is going to send security results to some destination. It's really flexible, and it's a great way to get a notification about a runtime event that's detected. It can send email to a team, or Slack to a team based on a webhook. So there's some really clever stuff in here. Check out the documentation. But being able to notify the team that owns a given deployment by using an annotation with a Slack webhook in it is pretty clever. Note that once I configure an integration like Slack, or even something like Splunk or just a generic syslog or webhook, once you've created and named it, you have to add it to policies. We'll look at that in a second. Another thing that many people like to do would be to set up some access control other than the default admin account. If you followed the installation instructions and you're like me, you're lazy, you're still running as the admin user with the admin role; that's the only role that built-in account can have. So you're almost certainly going to want to set up an external auth provider.
So anything that supports OpenID Connect or SAML 2.0 will work. You can actually use Google Identity if you have a GKE account. Most providers will support one of these standards. One of the StackRox folks out there wrote an integration example for Keycloak, for example, but almost any identity provider will allow you to sign users in here, assign a minimum role, and assign custom roles and scopes. You can give people access to the UI here without giving them full-on admin rights. But like I said, a lot of the stuff here is pretty much preconfigured, at least with our opinion of what good security looks like. So you're going to start to see all these violations. Don't think of it as something where you have to investigate every single issue, or that you have to go down and check off all these boxes and mark them as resolved. ACS is going to find misconfigurations and vulnerabilities in components that come from Kubernetes, from the cloud providers, from any of the third-party software that you're using, as well as your own workloads. It's going to look at everything. But it's not something that necessarily has to be fixed in every single case, though it is a good place to get started. The driver for all of these violations here is going to be the policy engine, which comes preconfigured. So let's take a look at policies, because this is the heart of the product. And this will give you an idea, aside from poking around in the user interface and clicking on reports and looking at details: when we want to start doing something with StackRox and start putting in policies to get teams to resolve their container misconfigurations, policies is where we go. This is the heart of the product. And you'll see there's a whole bunch of them here preconfigured. These all come from us. You can write your own, of course; there's a create policy button up here.
Policies more or less take the criteria that StackRox measures from images, from deployments, from runtime, what's going on in the network, what's going on inside of your containers, and then use that criteria to take some action. Now, a lot of these policies come from tried and true security use cases. Keep your images up to date. Keep your images free of extraneous surface area, stuff like package managers. They're useful utilities, but in a container image, especially in your app that's running in production, these things just create something for an attacker to use. Of course, we often start with vulnerabilities. Serious vulnerabilities like the old Apache Struts one or the much more recent Log4Shell vulnerability are really good places to start looking. There are also some generic vulnerability management policies that are built in. This one specifically calls out that CVE-2021-44228 vulnerability from December; it's in addition to the default policies where we would also find it, it just calls out that particular vulnerability separately. As long as a vulnerability has a fix published out there, meaning it's fixable, and it's serious, meaning it has an important or critical ranking or a CVSS greater than 7, you're going to find this policy will trigger on it. Let's start with this one. Here's an important tip: try not to modify the default policies. You can modify them, but there are some limitations in doing that. One of the limitations would be that they can get overwritten and changed in a future release. You're better off cloning a policy if you want to use it, especially if you're going to do things like enforcement. Rather than try to just modify this one, I'm going to clone it. Here I have this fixable severity at least important, which is a little bit tough to parse. It means that I'm looking for vulnerabilities that are either important or critical; those are the two severity rankings. That also could mean a CVSS greater than 7.
There's a fix published upstream. If this were a vulnerability in something like a Red Hat supplied component or an Ubuntu package, it means that upstream those providers have a fix available for it. Going into my cloned policy, I'm going to leave all these things at their defaults. A policy encompasses the what: what do I want to do with this? It also encompasses the how: how should I react? In this case, you'll see that at the build and deploy lifecycle stages, I'm going to set this to inform and enforce. What does this mean? It means that there are two stages when I can ask StackRox, what do you think about this image? I can do that early on in the build, consciously, or I can do it sort of inline with my deployment using the admission controller. We get two shots at this. Ideally, you will always do it here in the build, and teams will actually look at the results. We'll take a look at this in a second. They're going to see what this looks like, and they're going to honor your request that they fix these vulnerabilities. So the build enforcement is a way to fail the command line tool, basically, to return a non-zero exit code, and to tell the user that, hey, there are some issues that you have to fix here. These vulnerabilities are out there, they're serious, and there's a fix published if you just go out and take advantage of it. So we'll take a look at this build enforcement. I'll leave the deploy enforcement off for now, but this is belt and suspenders. This is saying we want to protect my Kubernetes cluster directly, so that if somebody tries to deploy something, either not having gone through my build pipeline, or, let's say, an attacker gets hold of an open Kubernetes API endpoint and credentials and sends commands directly to the cluster, we protect against that using the admission controller here. So let's enforce this one. I'm not going to change the criteria, but there's loads of stuff here in the policies.
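That build enforcement can be wired into a pipeline with nothing more than the exit code. Here's a minimal sketch of a CI step, assuming a placeholder image name and that `ROX_ENDPOINT` and `ROX_API_TOKEN` are supplied as pipeline secrets:

```shell
# Fail the pipeline stage when roxctl reports an enforced policy violation.
# IMAGE and ROX_ENDPOINT are placeholders for your registry and Central.
IMAGE="registry.example.com/myteam/myapp:latest"
if ! roxctl image check --endpoint "$ROX_ENDPOINT" --image "$IMAGE"; then
  echo "StackRox policy check failed for $IMAGE; see output above for fixes." >&2
  exit 1
fi
```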
What's nice about this is it's one policy engine; you get one place to create these rules, and they can actually take into account a lot of different criteria. Here, we're looking just at stuff that's in the image: it's just image contents, it's just whether or not the vulnerabilities out there are fixable and serious. But I can also combine this with other data, like what namespace is this in? Is it exposed on the network? These are all really great for honing in on a condition that you specifically want to find. They're also really good for getting started. I'd love everybody in my environment to fix every vulnerability that has a fix available, but in reality that's probably not going to happen. So what I can do here with policy is scope it in a way that makes it a little bit more digestible for my teams to be able to go in and start working on this. Maybe I start with a rule that says, hey, if you are building an exposed network surface, if you have something that's exposed, let's say using a load balancer that's publicly available in my environment, I absolutely want you to fix that. That's a cool thing that I can do here. I can write a rule that says anybody trying to deploy an image in this way, with exposure, gets a little bit more scrutiny. I'm not going to do that right now because I want to show you what this looks like. Because this is a test environment for me, a lab environment, and I'm totally going to burn it to the ground after this, I'm just going to leave the scoping wide open and then save my policy. Now I'm getting a little bit of a heads up over here that there's a lot of stuff that's impacted by this right now. By saving this, I'm not going to shut this stuff down right away, but this should be a good clue, here on the right hand side, that you've got a lot of stuff that violates your new policy. Maybe that's not the net that you want to be casting here. Either way, I'm going to save this.
Now my policy is in the system here and it's set to enforcement. Let's go violate this policy. I'm going to switch over to the command line and show you, in a mock way, what a build looks like. What I've got is the roxctl command line, which you should have too. You can download this from the UI of the product, just in the upper right hand corner; get the appropriate version for your cluster. The capabilities here are pretty important, and there are quite a few of them. They're tied into RBAC. When you download this, you get roxctl, and it's fully capable of doing all of this stuff. It can be used to install or modify the software. You can use it for things like backup and restore, and debug modes. I'd urge you to take a look at some of the options in here. Under central, you're going to see things like being able to generate files, and there's stuff in here for managing certificates. That's another topic that, post install, you might want to look into, getting your own custom certificates in there, but we'll save that for another video. What I'm going to use this for is the roxctl image command, and also the roxctl deployment command. These are commands that I'm going to use to check some subject matter, like an image: give an image name to basically ask StackRox, what do you think about this image? Does it violate any of my policies? The capabilities that I have here on the command line are going to depend on my API token. Down here, I can provide a token file. You can also export this: if you export the ROX_API_TOKEN variable with the appropriate encoded token, you can run these things. Who knows if that's even the right one, but let's check. What I'm going to do here is run a simple image check. I'm going to look at one of the Red Hat supplied images out there. You can see that the only problem with it is that it has this root user. This is a low priority, but pretty typical.
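Put together, the token and the check look something like this from a workstation. The Central address is a placeholder, and `--insecure-skip-tls-verify` matches the self-signed lab setup shown here; drop it if you have proper certificates.

```shell
# ROX_API_TOKEN is the environment variable roxctl reads for auth;
# generate an API token in the ACS UI first and save it to a file.
export ROX_API_TOKEN="$(cat ~/acs-api-token.txt)"

# Ask Central for a policy verdict on a Red Hat base image.
roxctl image check \
  --endpoint central.example.com:443 \
  --insecure-skip-tls-verify \
  --image registry.access.redhat.com/ubi8/ubi-micro:latest
```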
That's probably because I shouldn't really use this image as is. This is the UBI8 micro, a universal base image that I would use to build my apps on top of. The cool thing here is it's only one violation, which is pretty good. You can see that the build did not fail, which means my enforcement rule wasn't hit. You can actually verify that by looking at the return code from roxctl. More or less, I've asked Central to evaluate this image. I'm skipping my self-signed cert validation here right now. What it's not doing, by the way, is scanning the image locally. I think that's a common question, a common misperception. Some vulnerability scanners out there will take an image that you have on disk locally on your workstation and will scan that for security concerns, but not ACS. ACS is pulling this, in this case, from Docker Hub. The command line here is just being used to connect to Central to get that feedback. Now let's take a look at something else. I'm going to pull again from Docker Hub. I'm going to pull this Logstash because I know specifically it has the Log4j vulnerability in it. This one's got quite a different output. Now, I ran this before, so it's only going to take a couple of seconds to pull the results from Central, because StackRox is caching the data. Yours may take a little bit longer. If it's not locally known about yet, you're going to see that it'll take a few more seconds for Central to go and retrieve that through the scanner, get it scanned, all that jazz. But the result here is pretty ugly. I've got a lot of stuff here, and you can see that I've got a bunch of violations. There's the Log4j called out, and I did have a build break here, on both the default policy as well as Chris's policy here. You can see again in the command line that a non-zero exit code means this failed. In most pipelines that you run, that's going to return an error, and this output is going to show up.
In your Jenkins, in your GitHub Actions, wherever you're running this, that's going to fail. The idea here is to drag your teams towards the right solution, which is to go out and grab these fixes, and they can run this anytime. Now, before I go back to the UI and we talk about deployment, I want to show you another option, which is not image check, but image scan. Image check defaults to human readable output; it's really useful for a human being to look at the results. But image scan does two things differently. One, of course, is that you're going to see JSON output by default (there are some other output formats). The other is that image scan is more thorough. It doesn't just ask StackRox for a verdict on policies; it also breaks down all the layers into the components, and all the components into the CVEs. It doesn't care if the CVEs are fixable or not. You can see that there's a great deal more detail. If you're going to post-process this, if you're going to use this in other systems, for example, and you want that very detailed breakdown of everything, the image scan is going to be your friend. All right, let's go back to the ACS dashboard. Actually, maybe I should take a moment to see if there are any questions from anybody out there. I love to talk about this stuff, so I'm going to talk about this stuff all day if you let me. There's a question about integrating OPA Gatekeeper policies into ACS or into StackRox. The answer is, unfortunately, today no. There's a possibility of that in the future, so keep an eye on the roadmap from Red Hat.
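A sketch of the scan variant and one way to post-process it. The image tag is a stand-in for the vulnerable Logstash from the demo, and the jq path is an assumption about the JSON shape, so dump the raw output and inspect it before relying on any particular field.

```shell
# Full component/CVE breakdown as JSON (image scan's default output).
# Endpoint and image tag are placeholders.
roxctl image scan \
  --endpoint central.example.com:443 \
  --image docker.io/library/logstash:7.13.3 \
  > scan.json

# Example post-processing: count discovered components. The path below
# is an assumption about the output shape; inspect scan.json first.
jq '.scan.components | length' scan.json
```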
The reason that we don't is, well, initially, the StackRox product was developed, I think, really prior to, or at least in parallel with, when Gatekeeper wasn't as well-known and we didn't have that resource to go and choose from. But the other thing that kind of gives us hesitation is that Gatekeeper has a set of criteria that it can operate on (OPA uses Rego), and frankly, in some ways it can do more than the StackRox engine. It can look at lots of things, including custom resources. There are a lot of attributes it can look at, but it can't look at the things we just looked at. It doesn't look at things like the contents of your Docker images, and so it doesn't have some of the features and functionality that we would need. But since OPA is so flexible, it is certainly something that the team here at Red Hat is interested in supporting, just not yet, not today. All right, let's go do something really stupid. Let's go find that Log4Shell remote code execution vulnerability, and I'm going to clone this policy again. That's good practice, because you don't want your policies to get overwritten. You can think of these default policies as more like examples of what you'd want to do rather than specific policies to be enabled. You can certainly do that, but it's just better to get yourself into the habit of cloning them. Again, I'm going to leave the description and the rationale at the defaults. This is a really important part of policies, though. If you're building security policies, we want you to think about involving the developer, the builder of this application, in your security decisions. We want them to understand why this is important and what they should do about it. As one of my colleagues once said, ACS, or StackRox, is a lot of carrot and a little bit of stick, and I love that. The idea here is to hopefully tell people about the issues and give them an alternative.
When we start talking about some of the Kubernetes controls that are available for Kubernetes-native security, we're really teaching teams exactly what capabilities they can make use of to get better security. It's cool stuff. This one is not cool, though. Remote code execution exploits are never good, and this one requires so little from the attacker. The remote code execution just allows someone to run arbitrary commands against your applications if they're configured in this way, if they have this vulnerability present. So the world's hair was on fire. And it's a great place to start with something like enforcement, because it's pretty well known, it's really dangerous, and it is a specific vulnerability. What I like a lot about these default policies is that they're set to inform by default. So you get that nice output in the CI pipeline; it tells you, hey, there are a bunch of things you can do to improve your deployment and your image, but you don't have to do anything about it. It's kind of nudging people. There's that carrot again. But this one's a good one for the stick. We're going to turn on enforcement here so that people can't get past my controls. So it's really just looking for those vulnerabilities. Again, I can combine it with other attributes, and we'll take a look at that next. But I'm just going to turn this on, because I don't want anybody deploying anything new that has this vulnerability present. Now, you can see that no deployments currently have violations. And we saw this in the last one: when I have a build or deploy time policy in StackRox, we're not going to start enforcing against things that are already running. After the build stage is done, after the deploy stage is done, we're not going to enforce a build and deploy policy that you put in place after those things are already deployed. We want to be secure.
We want to have policies, but we don't want this product to come in and suddenly shut things down. So you'll see that we're very careful about the kinds of actions that are even available in the product. If I were to turn this on and suddenly shut down half of my production environment because those teams haven't fixed this thing, I would have some explaining to do. Even at runtime: runtime policies will not look at the currently running activity, only future new events. They're always about future new events. So if you see ACS not enforcing something, that could be the reason for it. Anyway, now I'm going to save it. Now I have this code execution vulnerability policy enforced, and I'm going to go back to the command line. So now I can go look at image checks again. Let's look at that Logstash one again. And now you're going to find, of course, that the Log4Shell vulnerability is failing the build. But that's not that interesting; we saw this before. So I'm going to ignore that. What I'm going to do instead, and I'm using OpenShift, is use the OpenShift command line. Let's see if I have a namespace here. I'm actually going to create a deployment. This logstash YAML is just going to deploy that thing. So we're going all in. I'm just going to create this thing. And boom, I got a failure. So this is part two of enforcement. As a bad actor, I want to deploy something against your Kubernetes cluster. Or, you know, as just a developer who doesn't want to be bothered with nasty things like security error messages. Now I can't ignore this. I'm going to see that this was denied, and you can see exactly why and what this output looks like. And this is essentially the same information that I saw earlier. So it's best practice for me to get this at build time, but if I don't, the admission controller here has my back.
Now, an important thing here is that your ACS configuration may not have the same behavior, even if you turn the policy on in the same way. And that's because of the configuration we're being very careful with here in the cluster. So when you roll out your first cluster, whether you're filling it out with the form or you're using Helm to deploy it, you need to be aware of this feature called contact image scanners. It essentially is inline scanning. In other words, should we make the Kubernetes command line, the Kubernetes API commands, wait for StackRox to go scan that image? You might not want to wait. And you certainly can't wait beyond 30 seconds, well, 30 seconds exactly, because Kubernetes will time out. So we've got a situation here where somebody issues a Kubernetes command to create a deployment. The webhook arrives at the admission controller. The clock is ticking. We can't wait more than effectively about 27 or 28 seconds here in this timeout. We've got to return an answer back to the API command; otherwise, you're going to see all kinds of deployment failures. So because this modifies the behavior of your cluster, we set it to be careful by default. We don't do enforcement by default. We don't time out. We don't contact image scanners. So the StackRox software installation has to be modified to get the behavior that you just saw. And you might want to pause a moment and think about that before you go ahead and do it. Once you're confident, of course, go right ahead. And then as you go in and enforce each individual policy, they're going to reflect that enforcement setting. Now, of course, we've only seen a little bit of this. There's more. One of the things that we've talked about a lot with ACS, with StackRox, is that it's not just vulnerability scanning. If all you're doing is vulnerability scanning, that's a great start. We want your teams to fix vulnerabilities. But there's so much more out there that they can do.
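For the Helm route, the settings in question look roughly like this on the secured cluster chart. The value names follow the published chart, but treat them as assumptions and verify against your chart version before enabling enforcement; the release name, namespace, and chart reference are placeholders.

```shell
# Sketch: enable admission-controller enforcement and inline scanning
# ("contact image scanners") on an existing secured cluster install.
# Release/namespace/chart names are placeholders for your environment.
helm upgrade stackrox-secured-cluster-services \
  rhacs/secured-cluster-services \
  -n stackrox --reuse-values \
  --set admissionControl.listenOnCreates=true \
  --set admissionControl.listenOnUpdates=true \
  --set admissionControl.dynamic.enforceOnCreates=true \
  --set admissionControl.dynamic.scanInline=true \
  --set admissionControl.dynamic.timeout=20
```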
And so one of the other options that we'll see, instead of just looking at the image contents, is actually to go out and do a deployment check. That deployment check takes a YAML file as its argument. So just like I had with my image check, I'm actually going to change this to be a deployment check. Say we're doing this live. And instead of passing an image, I'm going to pass in a file. And now, because we've got the original YAML file here, I've got a lot more stuff I can look at. These are things like a pod's service account token being mounted. This is a default, right? I didn't specify to disable automounting my service account token, so it flags that as something that I didn't do, and there's a suggestion here. I didn't supply any resource requests or limits, because this is a dead simple deployment YAML; I didn't put a CPU and memory limit in there. I'm using a fully writable root file system, and this is one of my favorite settings. This is something that allows an attacker who exploits my application through this Log4j to write a payload to the disk, the virtual disk, the file system that the container thinks it has, and then execute it from there. Since your developers really shouldn't be using the root file system for anything, because it's not permanent at all, we like to see everybody use a read-only root file system. So this has nothing to do with the Docker image spec. It has everything to do with the deployment YAML that I've built. And this output is designed to help your team understand what that is and what to do about it. You're not going to see anything in here that says anything about running a StackRox thing. It's all about using the stuff that's already available: if your teams are familiar with writing deployment YAMLs, they should be aware of the security context section. And we don't go in there and turn on read-only root file systems.
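The deployment check itself, plus a manifest with those securityContext fixes applied, can be sketched like this. All the names, the image, and the Central endpoint are illustrative placeholders, not the exact demo files.

```shell
# A minimal deployment manifest with the flagged settings addressed:
# service-account token automount off, resource requests/limits set,
# and a read-only root filesystem. All names here are illustrative.
cat > demo-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      automountServiceAccountToken: false
      containers:
      - name: app
        image: registry.example.com/demo:latest
        resources:
          requests: {cpu: 100m, memory: 128Mi}
          limits: {cpu: 500m, memory: 256Mi}
        securityContext:
          readOnlyRootFilesystem: true
EOF

# Check the whole manifest, not just the image, against policy.
roxctl deployment check \
  --endpoint central.example.com:443 \
  --file demo-deployment.yaml
```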
We don't do it for them because that's probably just going to break everything. And again, we don't want to break everything. Some of this stuff is hard to get to. We know that not every developer is going to have an acute awareness of how much CPU and memory they need. But these are the things we talk about when we talk about Kubernetes-native security. So in this case, I did a deployment check against the YAML file. That's a local YAML file that gets uploaded to StackRox and checked. That's, to me, a more complete way of looking at it than just looking at your image. You will see at the top here that, because the image is specified, StackRox will go out and retrieve the image layers, look at the components, and look at the policies related to image contents. So I'm going to take some more questions. We can look at one more set of use cases for runtime security, if everybody is not completely bored to death at this point. Looks like I'm boring everybody; there are no questions here. All right, let's take a look at runtime. I have not got this set up yet, so you're going to watch it live. This is fun. When I look at the use cases here, you're going to see some things. I'm sure you've probably seen the network graph. You may have seen some of the components and the process discovery settings here, but the network graph is a good place to start. And again, you've probably seen that the UI is focused on drawing your attention to what's right and what's wrong, and helping you search through this stuff. The network detection here is being used to understand when activity in a container looks unusual. One of the nice things about containerized applications is that, if your teams are doing things the right way, they should be pretty boring. So in this case, I've got a not boring application. And we're relying on that.
This is one of the things we look at when we're trying to figure out whether something is malicious. Did an attacker get in and start making requests out to some external site? Did they start exploring the network, trying to find a place to move laterally? Did they try to connect to that database, or what have you? I can also look at that here. This is a little more hidden: if you haven't used the StackRox interface, go into Risk, click on the details of any of the risk entries in here, and then on the right, in Process Discovery, you'll find a couple of different views. One is a list view of all the running processes, essentially a log of every process that was created. You'll see a pinkish color here that indicates, again, unusual activity, stuff we haven't seen before. It's being called out because containers should be boring, and if they're not boring, that's worth investigating. Down at the bottom, you'll see the baseline. The baseline is more or less there to tell me what the container started with, and that allows us to compare and contrast with the actual running behavior. In my case, you can also see that we set up this demo environment to watch the behavior over time and alert us when something is unusual. We've actually scripted a simple exploit in here against the Struts application. You don't have to go to those links. What I'm going to do instead is go in and look at, let's say, backend-atlas. And this one looks clean; there's, again, nothing really interesting happening. It started up, it's a Tomcat Java application, there was a little activity at the beginning, and a little bit of activity later on. This is, oh, well, maybe this is a little bit malicious looking, but it all happened at the beginning. What I'm going to do here is deliberately go and disturb this.
This is locked in. So the baseline is here, more or less, to set the trigger for these violations. I'm also going to go into the policy engine, find my policy, and make sure it's set to enabled and inform. It's not going to do any enforcement just now, but it will show up when this activity happens. So let's go violate this rule. I'm going to find the pod, and I'm going to exec some commands in it, if I remember my syntax correctly. Once I get in here, just by running a shell, I'm immediately starting to run commands, and these are going to show up in the StackRox UI. It may not happen instantaneously, so you'll see it sometimes takes a second. But you should see, first of all, the little red indicator here start to light up, and you should start to see the process discovery happening. In this case, you know, I ran cat on /etc/passwd; you can see the arguments here. I ran /bin/sh. I ran an app called dpkg. You're seeing the process history behind all this stuff. A lot of this causes violations immediately; I didn't do anything special. Here's my package manager execution; here's my unauthorized process execution. But I can do something about it. So I'll do something really simple, like adding a user, right? A really simple thing, and nobody should be doing this in your environment. Now, I probably should have cloned the policy; I'm not going to do that in this case. I'm just going to turn on the enforcement, because I like to live dangerously. Enforcement at runtime is going to kill the pod, and I'll say a little more about that when I show it to you. I'm going to leave the policy as is. It's a really simple policy; it's really just going to look at the name of the process. In this case, I am going to add an inclusion scope, because I don't want to start surfacing problems everywhere else. I mean, really, nobody should be adding any users in my environment. But I don't really know.
And if you're just getting started with StackRox, I really don't want to be the person who brings down production by installing a new tool. So let's just stick to the backend environment here. Again, there are no violations yet. A runtime violation will still be tracked from this point on; that's the default behavior. So I don't have to redeploy, I don't have to do anything. In fact, I'm just going to go back to that existing shell. I can run apt update again, and ACS is looking at all this stuff, watching the processes go by. But it's only when I actually try to add a user that I see a reaction. So in this case, we're seeing enforcement. The process actually starts; I get a permission denied. That's good. But you can see that I've now been kicked out; I'm no longer exec'd into the pod. As a developer, I can go in and get my pods, and maybe I don't know what happened; all I was doing was exec'ing into a pod. For developers, DevOps folks, and SREs, you might just not know what happened. If you look closely, you'll see the pod name has changed, because we killed it a moment ago. You can also look at the events. But that's enforcement for StackRox: we kill the pod rather than interrupting the individual process, and that's the only option that's available right now with StackRox. Interrupting just the process is a pretty tempting thing, but it's disruptive, and we don't know the full story of why somebody is running that command. It could be a sysadmin going in to retrieve some data or look at some database they're used to using. It could also be an attack. By killing the pod and kicking everyone out of the environment, if it is an attack, we don't just stop one process, we reset everything, including anything they had already done that we might have missed.
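The exec-and-enforce sequence above can be sketched as follows. The namespace and deployment names (`backend`, `backend-atlas`) are taken from the demo and are otherwise hypothetical; since the commands need a live cluster, this sketch defaults to a dry run that just prints each command. Set `KUBECTL=oc` (or `kubectl`) to run it for real:

```shell
# Dry-run by default: the commands are echoed, not executed.
KUBECTL="${KUBECTL:-echo oc}"

# Exec into the demo pod (hypothetical namespace/deployment names).
# Even opening a shell here shows up in process discovery.
$KUBECTL -n backend exec -it deploy/backend-atlas -- sh

# Inside that shell you would run, as in the demo:
#   cat /etc/passwd   # logged in process discovery along with its arguments
#   apt update        # fires the package-manager-execution policy
#   useradd eve       # fires the add-user policy; with enforcement on,
#                     # StackRox kills the pod and the session drops

# Afterwards, the pod name changes because enforcement recreated it:
$KUBECTL -n backend get pods
$KUBECTL -n backend get events
```

The point of the dry-run guard is the same caution discussed above: on a real cluster, the last step deliberately gets the pod killed, so only run it somewhere you're willing to break.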
And if they were partway through an attack, this would wipe that out and set them back. So that's what I'm talking about. I'm happy to answer any questions you might have. There's documentation on the site that goes through some samples: running pods, collecting data, finding vulnerabilities. So, any more questions? If you want to hear a little bit more about what's coming up, there's more on that soon; I know there are a lot of interesting future projects that will be exciting to talk about on air. Our gracious host has posted the link for the site, and there's a Slack channel there as well. So if you have questions for me or anybody else in the community, please come and join us. Since there are no more questions, I'm going to call it a day. Last chance.