Let's get into the big topic of open source, something that we actually have in front of us. This is so awesome. We are an open culture that is actually in place. It's that process that a developer or, let's say... As the Kubernetes ecosystem really brings...

Welcome to this week's Ask an OpenShift Admin office hours live stream. I am your host, Andrew Sullivan, technical marketing manager with the OpenShift business unit. And I am joined, as always, by my co-host, Mr. Johnny Ricard. Good morning, sir. Hey, how are you? It is a beautiful Red Hat day. As I was saying just before we started, right? Every day is a feast. Excuse me, every meal is a feast. Yeah, I'll get this right someday. You got it right when nobody was watching. Yeah, it's my first time.

So yeah, welcome everybody. This is one of our office hours series of live streams here on Red Hat live streaming, which means that we are here to answer your questions. So anything OpenShift related that is top of your mind, anything that you would like to ask us about, we're here to help answer those questions, whether or not they're related to today's topic, which happens to be security and OpenShift. And I am very happy to welcome someone who I have been wanting to have as a guest for a long time, Kirsten Newcomer, to talk about closed-loop security with Red Hat and with OpenShift. Kirsten, if you don't mind introducing yourself.

Sure, hi folks. Great to be here. Kirsten Newcomer, I'm director of security product management for the OpenShift team. So that includes container and Kube security. Of course, Red Hat Advanced Cluster Security, which we'll chat about more as we keep going. But all things security for cloud-native applications.

Yeah, you know, it's just a small topic. It's just a little thing. No big deal. It's just automatic, right? We don't have to do anything. So yeah, we're 64 episodes in, I had to look at the little banner at the top of our screen here, and we've touched upon and flirted with security as a topic, but as a specific focus, we haven't. It's something that I think has been long overdue, and I'm looking forward to today's topic. Looking at our outline, there's lots of really cool stuff that we're going to be talking about.

So as always here at the start of the show, we have our top of mind topics. And I was not expecting this many; I think we've got, what, five, six of these, Johnny? Every week, every Wednesday morning at like 9 a.m. my time, which is 8 a.m. Johnny's time, I send him a message like, hey, have we got any topics for top of mind? And he's kind enough to reply. So this week we definitely came with a few, although I think most of them are pretty short.

So a couple of kind of logistics or upcoming Red Hat-centric things. One, Red Hat Summit is coming up in about five weeks now, May 10th and 11th. It will be a hybrid event, so there will be some folks on site in Boston, as well as online. I believe that's a Tuesday and Wednesday; I don't know, I'd have to look at a calendar, but we won't be streaming that week. So just mark your calendars. Please go attend Summit. See all the cool stuff they've got going on there instead of coming and listening to Johnny and me prattle on. The other thing that I wanted to talk about that's kind of a logistics thing is next week.
So next week on the 14th, we have the What's Next, or roadmap, session for OpenShift. That's Thursday at, I believe, 10 a.m. Eastern time. We'll have the product management team on to talk about the OpenShift roadmap and all the things that you can expect to see over the next few releases. So always a good time. Johnny and I will be here to help answer questions. I think Christian is planning on being on along with a handful of others. Usually Mr. Mike Foster is the one who manages that behind the scenes, which, coincidentally, he is the ACS TMM.

Let's see, moving on here. Oh, a follow-up from last week. So last week, remember, we talked about two topics. One was how to replace a failed control plane node when using the Assisted Installer, or really any non-IPI deployment. And the other one was how to create a customized Grafana instance. So for that first one, replacing a control plane node: you remember when I was going through the demo and we got to the point where I was joining the node back in, and it was like, why did it get this random IP address? I created a DHCP reservation. And I just kind of ignored it and pushed on, and surprisingly it still worked. So it turned out, as I was troubleshooting and kind of looking into what happened there, Peter Latterbach and I were chatting and he was like, maybe you have a second DHCP server. And, you know, like a brick smacking me in the face: yeah, that's exactly what happened. My normal DHCP server, for whatever reason, wasn't doing DNS updates for DHCP clients, so I had temporarily turned it off and moved to the router's DHCP server, and never actually fixed that. So when my helper node rebooted and DHCP restarted, I now had two DHCP servers. So, awesome. Yeah. So yeah, if you were paying attention last week, if you saw that fiasco, which again, surprisingly, still worked, that was why: I had two DHCP servers and they were competing with each other. Yeah. Alosa Doug, it was not DNS, surprisingly. And actually, I use a coffee cup with a lid when I'm streaming so I don't make a mess of myself, and I had my DNS coffee cup this morning. It says, if DNS had a face, I would punch it. It always makes me think of Christian Hernandez.

Let's see. 4.8 to 4.9 upgrades are still blocked. Let me share my screen here, because I happen to have a... nope, not that one. There we go. So I happen to have the security advisory here, or the bug fix advisory, rather. Just a reminder to anybody and everybody who happens to be listening to us: anytime we block an upgrade path, we will release an errata that announces that. So this is errata 2022:1086, notification of delayed upgrade path to 4.9. This one is relating to the etcd issue that we talked about last week. So just a reminder that this one is still ongoing; if you're trying to do upgrades from 4.8 to 4.9, you'll still see those blocked. And of course, I always recommend to folks that you subscribe to errata. Unfortunately, I don't know of a way to filter these so that you only get specific ones. But if you go to access.redhat.com/errata, and then you click, so I am logged in here, you won't see this if you're not logged in, click this notification preferences. And then it will, you know, say, hey, what do you want me to send you? You see, I have mine disabled because this is my developer account, not my normal account, but normally you would want to get those.
I always recommend to folks to use client-side filtering and search for, like, that string that I showed you a minute ago. All right, come on. There we go. So you should be able to search for, like, delayed upgrade or upgrade path, something like that, in the subject. And that should filter just the ones that are affecting any upgrade or blocked upgrade paths.

Moving on. So, strangely, the questions seem to come in waves. I don't know if you've noticed that, Johnny. Oh, yeah. There were, I don't know, three or four or five folks who asked, some internal to Red Hat, some external, hey, how do I install a specific version of OpenShift? So the way to do that is with the openshift-install version that you use. So if I go to access.redhat.com, and Products and Services, Cloud Computing, Red Hat OpenShift. And then I scroll down here and find the... oh, why can I never find this? That was not what I wanted. Come on. I want to download it. There we go. So if I click this download latest here, this takes me to the standard, you know, download interface that we've all seen for RHEL and for every other Red Hat product for ages. But if I click this button here, the dropdown, it shows me all of the versions that are available, and you'll notice that there are builds for RHEL 8 and for RHEL 7. So the reason why I suggest people go to this interface is because you'll notice, for example, there's no 4.10.2 or .1 or .0 or anything like that. This only shows you versions that are GA. So versions that never made it out of candidate, or pre-release versions, don't show up in this interface. So if I wanted to, for example, install 4.9.13, I simply click here and then I can come down and download the client that I need, which is just... sorry, the client is oc, the installer is openshift-install. So I download my client and my installer, and then when I install the cluster, when I go and do openshift-install create cluster or whatever, it'll install that specific version.

So it is possible. You can, of course, go to the mirror. So mirror.openshift.com, and openshift-v4, and then, what is it? Dependencies? Clients. Clients, yep. I do the same thing every single time. OCP. So if I go in here, you'll notice that all versions are inside of here. So here's 4.10, and you see I have zero, one, two, three, four, five, six, seven, eight. So 4.10. I didn't want to click on you, I wanted to highlight you. So remember, these three were never GA. This was the first GA version. And then I believe that these two are candidate releases, if I remember correctly; they're not the current GA release. So I guess we can check that. Just going to console.redhat.com/openshift/releases. So 4.10. Yeah, so 4.10.8 is in candidate; it's not a GA release yet. And 4.10.7, it looks like it's going to get skipped, because it's not in fast. So it looks like it was, you know, dropped from being promoted. So it's one of those, like, yes, you can go to the mirror, but you just have to be conscious and careful of which version you're downloading, because you could end up in a situation where you accidentally deploy an unsupported version. And if you're pulling the CoreOS image, I understand the CoreOS version doesn't necessarily correlate to the oc or openshift-install version; like, they don't necessarily release at the same cadence. Yeah, that's a good point.
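To make that concrete, here's a minimal sketch of pinning an install to a specific GA version from the command line. It assumes the mirror layout shown above; 4.9.13 is just the example version from the conversation, and the x86_64 paths and tarball names reflect how the mirror is laid out today.

```bash
# Pull a specific GA client and installer from the mirror (or use the
# versioned downloads on access.redhat.com, which only list GA releases).
VERSION=4.9.13
MIRROR=https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp

curl -LO "${MIRROR}/${VERSION}/openshift-client-linux.tar.gz"
curl -LO "${MIRROR}/${VERSION}/openshift-install-linux.tar.gz"
tar -xzf openshift-client-linux.tar.gz oc kubectl
tar -xzf openshift-install-linux.tar.gz openshift-install

# Confirm the installer is the version you expect before creating the
# cluster; the installer deploys exactly the release it was built for.
./openshift-install version
./openshift-install create cluster --dir ./my-cluster
```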
So if I come back here to the mirror and go to dependencies, and CoreOS, and 4.10, you'll see that there is only one 4.10 CoreOS release. This is perfectly normal. And in fact, if we go back to, like, 4.9, something that's been out for a while, you'll notice that there's still only one for 4.9. They only release a new CoreOS image if there's something that affects the image. Normally what happens is the install deploys with, say, the 4.9.0 image on the instance, and then the first thing the machine config operator is going to do is push the version that matches the cluster install to that host. So thank you for that reminder, Johnny. Yep, no problem.

And then the last one that I'll mention here is StackRox. So Kirsten, this is one that is near and dear to your heart, and I know that you have been participating in it for quite a while. And that is StackRox, which is now known as Red Hat Advanced Cluster Security for Kubernetes, I think. You got it. Yep, it is now open sourced. So congratulations to you, congratulations to the team. I know there's a lot of work that went into this. I've been keeping up with Michael Foster on the back end, and this is a big one. So yeah, I don't know if there's anything that you want to say about that. No, we're just really excited. The upstream community is still called StackRox, and stackrox.io, as is being shown there, is the home page. Mike Foster is our community manager, so, you know, great thanks to Mike for doing that. We're super excited about working with the broader community. Also, there are some contributions we'll be making back to other communities, such as Falco. So really looking forward to this collaboration, and the opportunity to build on the team, and the feedback we expect to get from that. So, excited about it. Awesome. Yeah. Yeah, I saw a post recently on Reddit where somebody was asking if it had been open sourced yet, and my response was no, but keep a close eye; maybe tomorrow.

I also noticed that in the chat, if I haven't seen it, if I scrolled past it, I believe there was a question about StackRox requirements, hardware requirements for installation. So they're basically the same as Red Hat ACS. I'm just posting a link, which I hope one of my colleagues will put into chat, to the ACS installation doc. So really, it's the same. If there's not enough information in that doc for you, we can follow up afterwards. We do have a separate operationalizing ACS doc, but in the near term, I think that'll get you started. So. Yeah. Yeah, that was rock count who was asking. All right: regarding the new security StackRox solution, what would you rate as hardware requirements, with cores, RAM, et cetera, to run? So, thank you.

And Stephanie, our producer on the back end, who does amazing work: depending on what platform you're watching us on, she's either Red Hat OpenShift, or I think it's OpenShift on YouTube, or Red Hat on the Red Hat YouTube channel. So if you see her, or if you see that account posting, that's her. And thank you for all of your help in the background stuff.

Yeah. Something else I might say about, if you're going to install ACS or StackRox on OpenShift, and, rock count, for your home lab, right: the UI is installed as Central. You could install that on an infra node.
If you actually have an infra node for OpenShift, you could, if you are playing around with ACM, probably install ACM on the same infra node with ACS. There are other components of ACS, one of which needs to be installed on every node that you want to monitor, and that's in particular the collector. So the collector is an eBPF probe or kernel module. It's going to be monitoring Linux processes and feeding that data to ACS. That does need to be on every node that you want that security on: control plane, worker nodes, infra nodes, whatever you decide. And then the other components: there's a sensor, and all the data that's collected from secured clusters is fed through the sensor. There's also the scanner. If it's a local OpenShift cluster and your images are being stored in the internal OpenShift registry, rather than an external registry, the scanner will also be deployed on your cluster so that it can access images that are stored in that internal registry.

Just to be clear, that's different than something like Clair. So, this happens in our world, right? Remember that StackRox is an acquisition. The ACS scanner is actually a fork of Clair v2. Red Hat Quay ships with Clair v4. And one of our projects, one of our roadmap activities this year, is to get to a single Clair-based scanner that will be available from Red Hat. So you'll see that the ACS StackRox team will be contributing code into Clair v4, because that divergence happened, and there's a fair amount of refactoring that's going to have to happen. Today ACS can cover more language-level vulnerabilities than Clair v4 does. So we're really looking forward to getting to one place, one scanner, where no matter where you're accessing the Red Hat scanner, you get the Red Hat version of Clair, you get this coverage. And if we circle back to Spring4Shell for a minute, I know one of the links that was shared mentioned we're working to get updates into the ACS scanner database for discovery of Spring4Shell. Internally, we've had a few glitches in our release pipeline; otherwise, this would have been out already, but we're still working to do that.

And I don't know if you intentionally segued into Spring4Shell, but we do have a question from Kumar: OpenShift was listed as affected, but now it's been removed, with only Serverless being affected. Correct. So that is the latest update. And in fact, I did notice the questions, and we've shared a couple of links about those. So one of them is to the Red Hat security bulletin, which is always updated as we figure out new things. There was an initial mistake where they thought full OpenShift core was impacted; they realized absolutely not, our product security team digs in. But serverless functions, the tech preview version of serverless functions for Spring Cloud, is affected, as are also Red Hat Decision Manager, Process Automation Manager, AMQ 6.3 and 7, and Fuse 6 and Fuse 7. And again, Red Hat ACS. We are working to add the data to allow you to discover any vulnerable images that you might have on your cluster or in your registry using the Red Hat ACS scanner.

Very cool. And just to make sure I understand: so Clair, which is a part of Quay and multiple other registry instances, I think Harbor ships with Clair as well. It's an upstream project. Yep.
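Stepping back to the install footprint for a moment: here's a rough sketch of standing up Central with Helm and steering it to an infra node, for a home lab like the one rock count asked about. The chart repo is the one from the RHACS install docs; the `central.nodeSelector` value is my assumption about the chart's placement knob, so verify it against the chart version you're actually using before relying on it.

```bash
# Sketch: install ACS Central services on OpenShift with Helm. Check chart
# values with `helm show values rhacs/central-services`; the nodeSelector
# path below is an assumption, not gospel.
helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
helm install -n stackrox --create-namespace \
  stackrox-central-services rhacs/central-services \
  --set central.nodeSelector."node-role\.kubernetes\.io/infra"=""

# Each monitored cluster then gets the secured-cluster chart, which deploys
# the sensor, admission controller, and per-node collector. It also needs an
# init bundle (certificates) generated from Central, passed in with -f.
helm install -n stackrox stackrox-secured-cluster-services \
  rhacs/secured-cluster-services \
  -f my-init-bundle.yaml \
  --set clusterName=my-cluster
```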
So Clair scans any images that are put into that registry, whereas ACS's image scanner can scan kind of any image. Including, and some of the really cool demos that I've seen lately are, like, I have a pipeline and I'm building an image: before I even push it into the registry, I can use ACS to scan that image. Well, so actually, what ACS is doing when you're scanning in a pipeline... so absolutely, you can scan using the Red Hat ACS scanner in a pipeline. However, what it's going to do is connect to the registry where you're storing that image in the pipeline. That said, we have on our roadmap, and we're shooting for it, it might land late 2022, might be early 2023, the ability to use the ACS scanner to scan an image on a developer's local desktop. We really want to shift security even further left. So when people think DevSecOps, one of the first things they do for the DevSec part of that closed loop is add vulnerability scanning into the CI/CD pipeline. That pipeline does include a registry; you need a place to store the container image that you're building. We want to shift that further left so that even if you're just doing a build on your desktop or your laptop, you can run that ACS scanner there, get that early information, and use that vulnerability data. Ideally you're able to fix the vulnerability right then, before you push it to the registry. But even if not, what you learn can inform choices about whether or not the image is production ready.

Very cool. Yeah, I didn't know that it was still using the local or the internal registry for that, but even still, that's super cool. ACS can connect to... you have to give ACS the credentials to connect to whatever registry you're using with your cluster. That could be an external registry. It could be Artifactory. It could be Nexus. It could be Red Hat Quay. It could be Amazon ECR. ACS doesn't care which registry. You just need to give it the access so that the metadata, the content, can be pulled... manifest, that's the word I wanted. The manifest can be pulled, and we can collect the data needed to map to vulnerabilities.

And I'm sorry, you can ask me another question if you want, but I love this theme of security through obscurity, so at some point I want to address that; it's shown up twice now. Yeah. Johnny and I both used to work for the government, so yeah, it's a concept we know and love. Oh, yes.

So our hope nine asks: does the scanner work with a distroless image, or one that's built FROM scratch? That is a great question. So right now we aren't great with scratch images or distroless images. There is work that needs to be done there, and again, that is something that is on the roadmap. We don't encounter it a lot, but we do encounter customers using scratch or distroless images. Like many scanners in this space right now, we're kind of leveraging package-related data as part of our mapping to known vulns. So to do distroless, we need to add some additional analysis to pull from binaries where there's no package data available. So. Yeah, that makes sense. And I'm sure it slows down the process, because you don't have this... you know, with a known base image, you already have this catalog, because you scanned it once, and now you can check against that.

All right. So we've kind of already launched into today's topic, but our hope nine, I see your question there; we'll get to that in a moment.
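Before moving on, a sketch of the pipeline-scanning pattern described above. The usual CLI is roxctl, talking to Central with an API token; the endpoint, image name, and token below are all placeholders.

```bash
# Scan and policy-check an image from a CI stage. As discussed above, ACS
# pulls the image metadata from the registry the image was pushed to.
export ROX_API_TOKEN="<token generated in the ACS UI>"   # placeholder
CENTRAL="central.example.com:443"                        # placeholder
IMAGE="registry.example.com/myteam/myapp:1.2.3"          # placeholder

podman push "${IMAGE}"

# List the vulnerabilities ACS finds in the image.
roxctl image scan --endpoint "${CENTRAL}" --image "${IMAGE}"

# Evaluate the image against ACS build-time policies; violations of
# enforced policies exit non-zero, which fails the pipeline stage.
roxctl image check --endpoint "${CENTRAL}" --image "${IMAGE}"
```

Deployment config analysis, for example with KubeLinter (which comes up again later in the conversation), usually sits alongside this in the same pipeline stage.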
But I want to officially launch into today's topic, which is, again, closed-loop security with OpenShift and Red Hat. This is the first time that I've heard that phrase coming from you and from your team, so I'm curious what that means. And, you know, security is this big topic, and it's baked into every level, every aspect of OpenShift and what we do. Johnny and I have brought up many times, you know, hey, there's this CVE, but OpenShift isn't affected because of SELinux, or SCCs, or it's mitigated because of those things. So I'm very interested and curious to hear about that.

Yeah, so the reason I like the phrase closed-loop security is, again, DevSecOps is a popular topic, something we hear about a lot. And in many of the conversations that I have with folks, as I said earlier, I hear many people think of DevSecOps as what I've started calling DevSec. That is, shifting security left, adding some security gates to the CI/CD pipeline. But that's only part of that infinity loop that is DevOps or DevSecOps. So what you really want to do is: you've collected data in the development environment, and with ACS or StackRox, not only can you do vulnerability scanning, but you can do app config analysis as well. You can look for access privs in your deployment data. You can catch issues early, and you can use what you find in the CI/CD process to set security policies that are applied at deployment time, with admission controllers, and other security policies that may be relevant at runtime. And so that SecOps part, that second part of the loop, is also important. You can inform your policy creation with what you learn in the CI/CD process, and what you learn at runtime absolutely should inform back to the development team.

An example of that with ACS today is that ACS will do an analysis of the network communications of your running pods, and it can auto-suggest ways to tighten those policies, and it can also help you apply them. It can simulate what the communication path would look like with that auto-suggested policy and then help you apply it if you wish. So that's also a place where, depending on how the policies are defined and who in your organization is given permissions to do those definitions, again, what you see in runtime can inform back to the developer loop. We also have on our roadmap, working with some of our IBM Research colleagues, looking to shift the generation of network policies left as well: do some analysis of your app config and your app data early on to propose a network policy that would work well in runtime. And I see this mention of zero trust. For me, zero trust is an even deeper-level conversation, right? There are so many layers.
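Here's roughly what the network policies this part of the conversation circles around look like in practice: a zero-trust default deny, plus the more common allow-same-namespace pattern that comes up in a moment. The myproject namespace is illustrative, and this is a sketch rather than a recommendation for any particular workload.

```bash
oc apply -n myproject -f - <<'EOF'
# Zero-trust starting point: select every pod, allow no ingress at all.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# The more common pattern: on top of the deny-all, allow traffic between
# pods in the same namespace (an empty podSelector in "from" matches all
# pods in this namespace).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
EOF
```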
But maybe we can use that, Andrew, to circle back to what you said about the layers of security in OpenShift, because when we think about it, there are a lot of different layers that need to be considered, right? So you mentioned SELinux, right? RHEL CoreOS, a container-optimized operating system, only has the content needed to run OpenShift; therefore, it reduces your attack surface. SELinux is on by default, and SELinux has demonstrated the ability to protect against or mitigate the known container escapes to the file system. Security context constraints in OpenShift allow the administrator to control the privs a workload can run with. By default, a regular user cannot deploy a privileged workload to OpenShift. Of course, a privileged user is a little bit different, right? That's another scenario that has to be looked at carefully. Built-in audit logging, both for the host layer and the API server layer. Kube network policies for micro-segmentation. You can implement zero trust at the network policy layer by starting with a deny-all policy for your Kube network policies. Now, is that really where you want to start? I'm not sure. Most people are more likely to start with, I'm going to allow communication between all pods in the same namespace, or OpenShift project. That's a more common pattern, right?

I think you're touching on, going with that, like, a default deny-all, right? To me, that's touching on, I think, one of the biggest frustrations that many people have coming from another Kubernetes into OpenShift, and that's things like no root in the containers, right? And no privileged containers. There's an awful lot of things that, quote-unquote, just work in other Kubernetes distributions because they're insecure. I don't know if there are any tools or any recommendations that you have for bringing those types of applications in.

Yeah, that's a really great question. So, as you say, by and large, there are a combination of things that lead to what you just described. And certainly it's absolutely true: OpenShift out of the box is more hardened than the majority of the Kube distros that are out there. I'm not sure I'm aware of one that is more hardened than OpenShift; doesn't mean there isn't one. And it's largely security context constraints, but also the SELinux piece, and the way that they work together. So the challenge is that workloads often request privileges that maybe they do or don't actually need, but they're just used to requesting them. Sometimes, especially if somebody's containerized a legacy workload, there may be some requirements or some interactions where they expect to run with a particular user ID, and you can't do that by default on OpenShift; if you're a regular user, you have to have a special SCC to do that.

So really, the best recommendation: use something like KubeLinter or ACS to analyze your deployment configs to look for what privileges are being requested by the workload, what Linux capabilities, user identification, anything like that, right? It's in that config data, security context, a whole range of places to find it. Use one of those tools so that you can analyze in advance whether or not your workload is going to be accepted to run with the restricted SCC on OpenShift. There's also some additional work that we have kind of in hand: there's an upstream project called the Security Profiles Operator. That will help you if you know you need privileges. The SPO can help you define SELinux profiles that will further contain any concerns that you might have related to the fact that the workload is running privileged. That is awesome.

Yeah, and Johnny, I know you have many war stories as a consultant around, especially, the specific UID, right? OpenShift, by design, right, assigns a range of random UIDs for each namespace. I came from the storage realm, and especially NFS. With NFS, now I have random UIDs. I used to get a lot of frustration from folks, because it doesn't work the way it, quote, should just work: I just want it to work out of the box, I don't want to have to think about it. Where's Christian?
Is Christian still listening? Yeah, he is. He has "stop disabling SELinux" t-shirts. It's one of those things: for a lot of people, step one after I finish my deployment is turn off SELinux. Every time you disable SELinux, you make Dan Walsh cry. I know I've seen that one. But there is such a t-shirt, or a statement. In fact, the truth is, we actually don't support OpenShift with SELinux disabled. Your workaround, if your workload isn't going to work with SELinux on: there are, I think, nine different SCCs that we ship out of the box, and the most privileged one will allow you to work without an SELinux context. But you really don't want to make Dan cry. Exactly. You just don't want to do that.

There are other options. And it's interesting, some of the other workloads that can be challenging are workloads in the telco space, where they need to talk with hosts. There may be secondary network interfaces that they need to interact with, and that's a whole different level of privilege. Again, though, you can still limit it; that's one of the values of Linux capabilities. The challenge is that not every developer thinks about or works with Linux capabilities at that level. And they also may not know what system calls their app needs to make, which is one of the reasons we're investing in the Security Profiles Operator: how can we help provide some automation that informs, that provides additional security to those more privileged workloads?

I would definitely say that in my experience, it's been that part of the conversation: what does it mean if I allow anyuid, or if I allow host networking, or whatever? Just going back, it's the possible; it's what could happen. It may not be what's happening right now, but it's what could potentially happen if something gets compromised. Then you expose everything. It's that lateral movement. What can you do, how do you mitigate that best? Again, contain it, keep it from getting to the host file system, which is a key role of SELinux. Then if you've bypassed that, or it has the privs, and it gets to the host file system, to the node, then you're in a position where you can have node-to-node: you get lateral movement on the cluster. That's something you really want to avoid. What's that, I think Microsoft calls it? Wormable. It can spread like that. I do appreciate that.

You have said hardened multiple times. It was something that I learned from Mike Foster, who I think learned it from you: there is no such thing as a secure system. This is true. There are hardened systems, systems that make compromise difficult, or a much smaller opportunity. A determined adversary can almost always find a way. As we've been seeing with Log4Shell and now Spring4Shell, new vulnerabilities are discovered regularly. Oftentimes, or even if we think all the way back to Meltdown and Spectre, these are sometimes vulnerabilities that have been around for a long time but were never noticed, weren't exploited, and suddenly become exploitable, and as an industry we scramble to respond. So we absolutely want to think about how we harden, and how we provide protection that can mitigate in the case of escapes.
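One concrete handle on the "you can still limit it" point from the capabilities discussion above: drop everything, then add back only what the workload actually needs. A minimal sketch; the image name is a placeholder, and NET_ADMIN is just an illustrative capability for a networking-heavy telco-style workload. Granting it still requires an SCC that permits that capability.

```bash
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: capability-limited
spec:
  containers:
    - name: app
      image: registry.example.com/myteam/myapp:1.2.3   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]        # start from nothing...
          add: ["NET_ADMIN"]   # ...and add back only what's truly needed
EOF
```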
So certainly there are environments, as is being kind of mentioned in the chat, right? There are environments that are truly not just disconnected from the internet, but, you know, somebody said locked in a vault, right? Air-gapped, that's the phrase. Thank you, Andrew. And at the same time, you still have to update those systems. That's typically done by something like sneakernet, right? I'm actually wearing slippers at the moment, not sneakers; I was going to show my feet, but... Slippernet. Exactly, slippernet. But I still have to get that update from somewhere. I have to analyze that update before I apply it to my air-gapped system. And I have to be sure that there's no tampering on whatever device I'm using to carry it from that analyzed environment to my air-gapped system. That's no...

Yeah, and it's funny you mention that. Many moons ago, I worked for a Department of Energy site, and as I was going through my onboarding training, they were talking about computer security, and they brought up the incident, I think it was Iran, where somebody was just dropping USB keys in the parking lot. And of course somebody would find one and be like, oh, I wonder who dropped this, and go and plug it in. And they plugged it into an air-gapped system. Yeah, Stuxnet. Yeah, Stuxnet. So it's one of those, like, even an air-gapped system is not, thank you, rock count as well, necessarily secure. RHEL USBGuard for those scenarios. Yeah, they're handing those USBs out at conferences, and the same thing dropping them in parking lots: just a huge net cast to see who would actually do something. And they got lucky.

Yeah, so we do have a couple of questions. So, Alosa Doug, apologies to anybody for mispronouncing names, usernames, et cetera: is ACS capable of analyzing user-created SCCs? So at the moment, the analysis is that config analysis I mentioned earlier, which is focused on the deployment configs, Helm charts, those sorts of things, rather than SCCs. So, you know, SCCs, the OpenShift thing, security context constraints, are comparable to Kube pod security policies; pod security policies have been deprecated, pod security admission is the new thing, and we'll still support SCCs. So SCCs are cluster objects. And yes, an application has to have the privs to access one of those SCCs, but SCCs are managed by the administrator. Even if the app developer provides that custom SCC, the administrator still has to get it onto the system, into the namespace the app is going to be deployed in, and it has to be configured so that the app can actually have the privs to use that SCC.

So ACS today is not looking at those objects. However, I think that's an interesting idea; it's something that we could consider. The question, though, is that those really are not directly tied to applications. They're indirectly tied, whereas deployment configs and Helm charts are directly tied and will ask for certain privs. And so analyzing an SCC will tell you what kinds of privileges a workload might be given on a cluster, but until you make that connection between the SCC and the workload that is allowed to use that SCC, you don't have the direct connection. So I'm not sure that analyzing the SCCs themselves is going to get you what you're looking for. But it's an interesting idea. Yeah. Yeah.
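A quick sketch of that indirect tie from the command line. SCCs are cluster objects; a workload picks one up through its service account, and after admission the pod is annotated with the SCC that validated it. The service account, namespace, and pod names below are illustrative.

```bash
# The SCCs shipped out of the box (restricted, anyuid, privileged, etc.).
oc get scc

# An administrator grants a service account access to an SCC; this is the
# step that connects an SCC to the workloads using that service account.
oc adm policy add-scc-to-user anyuid -z myapp-sa -n myproject

# After admission, the pod records which SCC actually validated it, which
# is how you make the SCC-to-workload connection concrete.
oc get pod mypod -n myproject \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}'
```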
And it brings up, I think, a tangential question for me, which is: if we have the OpenShift Security Guide and all of that other stuff, do we have, for lack of a better term, a set of best practices or recommendations around security practices for things like that? Things like, you know, I think the anecdotal recommendation has been, don't create custom SCCs just out of convenience, right? Going back to the whole, my application must run as this user, so I'm going to create an SCC that allows that, so that I can enable that maybe-bad behavior by the application. Instead, let's fix the bad behavior so that it can run within the correct constraint.

So I feel like there were two questions there, and I'm not sure I caught the first one. So regarding SCCs: again, you have to have the right level of privs on the cluster to create an SCC. That could either be cluster admin or project admin, as we call it, right, if the SCC is going to be associated with a project. And in terms of... sort of, if we circle back to your example about storage, Andrew. Oh, thanks to whoever shared the... yeah, Andrew shared the security guide. That was the first part of the question, right? What advice do we have? So it ranges, and the thing is that it doesn't always get to the level of detail that you're just talking about. Although, again, that's something that we could look at doing: where can we add this into the documentation? So there is detailed documentation on the SCCs. There's information available about the privs required to create new ones. The reason we recommend that you create a custom one, if one that's out of the box doesn't work for you, is because an update is going to replace a modified default SCC. Frankly, I wish that we had not allowed our defaults to ever be modified, but by the time we realized that we should have prevented that, we were so far down the road that we couldn't do it without potentially breaking people. So we haven't drawn that hard line.

Recommendations on SCCs can be really complicated. As Red Hat, myself and a number of our colleagues, we've done some of those investigations. It's come up for SCCs for storage more than once; it's come up with some other complicated workloads. There are blogs. There's a great blog, I'll try and find the link, that talks about the anyuid issue and the interaction with SELinux. I'll try and find that link for you all; if not, while we're talking, we can share it later. But I think maybe we should have a white paper that kind of digs into that. There are other types of guidance available, but the SCC question is so tied to application needs. Nonetheless, I think we've been doing enough around that in recent years that we could probably do a white paper on it. Interesting. I just linked a blog; I think that's the one. Okay. Is it Alexander Menendez? That looks like the one. No, it's William. William Caban. Yeah, there are actually a couple of different blogs. So, yeah. But it does say OpenShift and UIDs, so that might be one of them. Yep.

And I think this also flows into a question that our hope nine had asked earlier, which is, can ACS or any of the OpenShift tooling identify... the core of his question is, effectively, identify bad practices, right?
So his specific question was, can it flag lazy builds, or those that should really be a multi-stage build? But I think there are other things encompassing that as well. So, lazy builds: no, I don't think ACS is going to flag lazy builds per se, but ACS does have out-of-the-box policies that will notice crypto mining, that will notice and alert on exec in a running container. It will check for package managers in running containers, right? In many ways, you know, some base images include package managers, and so they just kind of wind up in your app because they were in the base image. But it's really better if you don't have a package manager in your running image, because that's an exploit path, right? And so there is a whole series of known bad behaviors that ACS will analyze for and alert on.

And some of those... so again, if you think vulnerability scans, generally I'm looking at a static set of data: it's an image, a manifest file, a known set of vulnerabilities. With ACS, the known set of vulns changes regularly, but that's still kind of not within context. ACS runtime analysis can go beyond that and add context that includes the kinds of things I just described: crypto mining, package manager present, exec in the container. And also we'll do a runtime baseline analysis of your deployment. If there is a significant change in behavior, it will be flagged as anomalous behavior and an alert will be sent. So there's kind of this initial collection of data that says, okay, we've looked at this for an hour, we think this is the expected behavior for the running container, and if that changes, we're going to tell you about it. So there's a lot more. And also, if you're looking at vuln data for an image that's deployed to the cluster, we can tell you whether a package that has that vulnerability is actively used by the application, based on that observation. If it's not used, take it out, right? Interesting. See what you can get rid of. Why not reduce your attack surface?

It's funny, when you mentioned package managers and why that's a bad thing, a path for exploitation, of course, the first thing that came to my mind was Andrew being a lazy administrator: oh, this isn't working, I'm just going to connect into the running container and install the package, and then I'll fix it later in the image build. Tomorrow's problem. Yeah. And of course, that's never best practice, because as soon as that pod goes down for some reason, it's going to be redeployed from the image without the fix that you just made. So until you fix that image, anything you do in the runtime environment is short-lived. Yeah.

So I want to take a moment and talk about the Compliance Operator and the profiles that are there, because I think we've talked about it in bits and pieces in other places, and I think it's important to highlight that, well, thus far we've been talking a lot about the application layer and kind of the platform, things running inside of OpenShift. There's still, like, CoreOS and all of the stuff that's happening there. So I wanted to get your side of things with regard to the Compliance Operator. Sure.
So one of the things that Red Hat has done for some time for RHEL is deliver automation that helps customers configure a RHEL server to meet the technical controls that are applicable to RHEL from a regulatory framework. And we do that with a scanner called OpenSCAP; that's a NIST-certified scanner. And we've applied the same principle and technology to OpenShift. So the OpenShift Compliance Operator runs the OpenSCAP scanner under the covers. It's going to evaluate the Kubernetes layers of OpenShift, the control plane, the core components, and it's going to evaluate RHEL CoreOS as well. It does both layers, against policies or profiles that we ship that can be run by the Compliance Operator.

So today, we have profiles that allow you to analyze against the CIS OpenShift benchmark. CIS is now doing per-distro benchmarks for Kube; there's no longer just one Kube benchmark. So there's the CIS OpenShift benchmark. There is the set of controls from NIST 800-53 that apply to FISMA or FedRAMP moderate; there are profiles for that. NERC, we were talking about electric utilities earlier, right? So NERC is a regulation for electric organizations. We have a profile for PCI DSS for those in the financial industry. Oh, and the Essential Eight for anybody who's in Australia and needs to conform with the Australian Essential Eight. And generally, the profiles have two named parts: one is checking the node level, and the other is checking that Kubernetes layer.

You can run these scans out of the box. OpenShift is configured to meet the majority of the CIS benchmark recommendations. There are some where we think it's important that a customer choose. You can do encryption at many layers: do you do it if you're running on a cloud provider? Do you do it on the S3 bucket? Do you do it on the RHEL CoreOS disk? Do you do it with self-encrypting drives? Do you do all three? What's the performance impact? How you manage that is really something a customer needs to decide. But once you know which framework you care about and want to be sure your cluster meets, you run the scan, you get feedback. There's a great dashboard in ACS with these details. The Compliance Operator comes with OpenShift; you can run it, but there's no UI, it's just kind of CLI output, plus controls that allow you to produce a human-readable report. You can use the CLI to generate the human-readable report; if you want that nice dashboard, that's in ACS. But also, once you've decided which controls, if there are some that are out of compliance and you want to turn those on, you can automatically remediate using the Compliance Operator. You can also tailor a profile; it may be that you want to turn on some controls, but not all of them.

It's come a long way. I remember when it was first coming out and we did a hackathon on it; I think it just did the scanning and reporting, and there was no real remediation at that time. That could be. Yeah, it's awesome. I'm glad that it's come this far. Yeah, and one of our other roadmap things is, with ACS, we want to make it easier to... we're going to do trending reports out of the Compliance Operator so you can see changes over time. We'll store data so you can see trends. And then we'll also be able, in ACS, to do workflow through the ACS UI. That's awesome.
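For anyone who wants to try this, here's a minimal sketch of kicking off the CIS profiles with the Compliance Operator. It assumes the operator is already installed in the openshift-compliance namespace; the binding name is illustrative, while the profile names reflect the two named parts described above.

```bash
oc apply -f - <<'EOF'
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan                    # illustrative name
  namespace: openshift-compliance
profiles:
  - name: ocp4-cis                  # the Kubernetes/control-plane checks
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis-node             # the RHEL CoreOS node-level checks
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF

# Per-rule results land as ComplianceCheckResult objects; remediable FAILs
# can be applied automatically via a ScanSetting with autoApplyRemediations.
oc get compliancecheckresults -n openshift-compliance
```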
You mentioned earlier something about Falco. Is ACS working with Falco, or looking at integrating Falco at all? The runtime behavioral analysis in ACS, I believe, is different. Falco is a big project; it's got collector libraries, and it's got its runtime behavioral analysis. So the ACS team is going to be contributing to the Falco collector libraries; that conversation is already happening. Right now the runtime engines, the behavioral analysis, are not the same, and so that's an area of conversation and collaboration, and we'll see whether, over time, maybe we get to a single runtime engine. Yeah, that'd be cool. The Falco project is a great project; we really appreciate our colleagues in the open source community working on it. Yeah.

So I know we're approaching the top of the hour. I don't know if you have a hard stop or not, Kirsten. It's okay; I do have to go do some Summit planning, so I have a little bit more time, but only a little. No worries. So for our audience, if you have any questions, anything you haven't asked yet, please go ahead and send those questions in to us. It doesn't matter which platform you're on; they all come in to us here as well as getting broadcast across the others. If there is anything that comes to mind after our stream, or if you're watching this not live, feel free to reach out. You can reach out to me or to Johnny; I'm easy to get, andrew.sullivan@redhat.com.

But yeah, other than that, I think there's a couple of questions here. I did want to... so, please, if you could expand on your question there about ODF encryption at rest; I'm not sure I understand your "how cluster encryption without KMS" statement. And I'm enjoying this, our hope nine and Christian and company; I'm getting a kick out of this. You guys are making me laugh. It's like, yes, secrets are so secret.

Oh, so, Kirsten, you had mentioned earlier talking about security through obscurity, so I didn't know if you had any comments on that, and our hope nine's ROT13 here. I think that's yet another sort of little joke, right? Similar to the base64. I mean, yes, people do it. It's of course not preferred. And in fact, oftentimes we talk on my team about, you can't secure what you can't see. So certainly some people do sometimes rely on obscuring something; you know, actually, base64 "encryption" for Kube secrets is a good example of that, but it's really not particularly secure.

So with ODF, I'm wondering whether the question is really, is ODF planning an integration with external key management systems? Is that what's being asked? Well, they just said that the docs are not clear. I know, I can pick apart that. And one of the questions that I would have is, is it capable of doing per-PV encryption? Like, using a KMS, can I give each PV its own encryption key, so that way... yeah. Yeah. So, you know what? I think there may be some news on that in the What's Next deck next week. But that might be something I need to check with our colleagues; my recollection is that we're not there yet. But let's double-check on that. And I will say there are partners, certified Red Hat partners, who do that today. Yes, there are multiple, and I'm sure there are some uncertified ones as well that can do it. Our hope nine, yeah: it's not what you don't know, it's what you know for sure that just isn't actually right. Who was that? Rumsfeld, you know: known knowns, known unknowns, and unknown unknowns. Yeah.

All right. Well, again, thank you so much, Kirsten, for joining us today.
This conversation has been fantastic, and I think our audience has enjoyed it as well. Security is something that is near and dear to everybody's heart; whether or not we want it to be, it is near and dear to everybody's heart. Yeah. So, yes, Christian, thank you for plugging your show. I do want to highlight that Christian has a new co-host on the GitOps Guide to the Galaxy. So everybody who's interested in that, please go and join tomorrow; say hi and welcome them to the show. For everybody else, we will be back next week. Johnny, I still think we haven't figured out our topic for next week, so sorry, Stephanie; we'll figure that out. Don't forget, the day after that, on the 14th, is the What's Next session. And anyways, thank you so much to our audience, thank you so much, Kirsten, for joining us, and Johnny, I will leave you with the last words. Just echoing what Andrew said, this has been an awesome topic. You brought a lot of color and context to security, and the interaction from the crowd is fun, because then we could have fun with it and actually talk about real stuff too. Thank you so much for coming on. And Stephanie, behind the scenes, as always, thank you; you're awesome. Yeah, a pleasure, everybody.