Hi everybody, and welcome again to another Tech and Talk. Tech and Talk is a podcast we started this summer, bringing you some fresh ideas and some deep dives on things, mostly cloud-native-y, but from all sorts of different parts of the technology that we get to use in our cloud-native applications. So sometimes we'll sidetrack, but today we're not sidetracking. We're really gonna go for some interesting conversation with a newcomer who, as it says on the screen, is a security strategist and very passionate about security here at Red Hat. And she's kindly offered to reprise a presentation she's done called Ten Layers of Container Security, which is rockin' awesome. I'm really glad to have Kirsten here with me, so I wanna let her do her talk. You can ask questions in the chat; we'll have live Q&A at the end and a bit of a conversation. So without any further ado, welcome to Tech and Talk, Kirsten.

All right, thanks so much, Diane. I'm really looking forward to talking with everyone on this topic. I'm gonna start not by talking about what containers are, because I'm assuming that most of you already have a good idea what containers are. And if you don't, I know there's some terrific content already recorded and available on OpenShift Commons where you can learn more. But what I did wanna call out is that containers change how we develop, deploy, and manage applications, and for that reason they really affect both the development and the ops teams, and there are different advantages for each team. We typically think about containers as providing advantages for the application development team: you get to package everything up with all its dependencies in one place, you can deploy that same container to any environment you want your application to run in, you really minimize the configuration challenges that you have, and it makes it much easier to share your contents and your components.
But there are also real advantages for the ops team, right? A containerized application has a smaller, lighter footprint than a virtual machine if you choose to run on bare metal rather than running containers on virtual machines, and of course you can run on both. And containers are more easily portable across different environments, whether it's bare metal, virtual as already mentioned, public cloud, private cloud, wherever you want. That said, it's really important that security be taken into account when you think about adopting containers, especially in an enterprise environment and for production. As you think about shifting workloads, the security team typically becomes involved and concerned, so you want to be prepared, both as an ops team and as a dev team, to talk with your security team about how you work securely with containers. And that's what this conversation is all about.

So we think about securing containers in multiple ways here at Red Hat: we think about both the layers of the solution stack and the container life cycle. I'm gonna go through each of these elements one by one, so I won't touch on them all now. I'll just mention that you don't really need to be an expert on each of these areas for today's talk; I'm gonna go at a reasonably high level, and if there's a particular area you'd like a deep dive on as a future talk, let's tee that up with Diane afterwards. We'll be happy to get into more. One other note: the security things I'm talking about are all things you can do in a DIY container environment, but I'm also gonna use OpenShift as an example of how this is implemented with a container platform and how that makes it easier. These are certainly things that can be done both DIY and with a container platform. We're gonna start with the container host and multi-tenancy.
So many of our security teams have really gotten comfortable with how to secure VMs, and they get a little more concerned when the technology shifts to containers. From that point of view, the OS that you deploy your containers to really does matter. You wanna be sure that you have an OS that provides security capabilities and that you take advantage of those capabilities. Of course, from the Red Hat point of view, that's Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host. A couple of the key capabilities to think about there, and since this is a tech audience, I can get a little bit into these.

Linux namespaces provide the fundamentals of container isolation. A namespace provides abstraction: it makes it appear to the processes within that namespace that they have their own instance of global resources. So that's a key element to rely on. SELinux is used to keep containers isolated from each other and from the host. It allows administrators to enforce mandatory access controls for every user, application, process, and file. It's the brick wall that's gonna stop a process if it manages to break out of the namespace abstraction, accidentally or on purpose. A really concrete example here is that SELinux provided mitigation of a discovered container runtime vulnerability, for those of you who really wanna go there. You can look it up, it's CVE-2016-9962, but SELinux was able to prevent processes from accessing host content even if those container processes managed to gain access to actual file descriptors. And Dan Walsh did a great blog on this, also something that you can look up.
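As a rough sketch of the SELinux piece just described, here's what an explicit SELinux context on a pod can look like in Kubernetes/OpenShift terms. This example isn't from the talk: the pod name and MCS level are made up for illustration, and in practice OpenShift assigns the MCS labels for you.

```yaml
# Illustrative pod spec: pinning an SELinux MCS level on a pod so its
# processes are confined separately from other tenants' containers.
# (OpenShift normally assigns a unique level per project automatically.)
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  securityContext:
    seLinuxOptions:
      # Distinct category pairs per tenant mean a process that escapes
      # its namespace still can't read files labeled for another tenant.
      level: "s0:c123,c456"
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7
```

The point is that the label is enforced by the kernel, not by the container runtime, which is why it held up in the CVE-2016-9962 case.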
Control groups: again, one of the advantages of containers is that you can run multiple applications on a single system, and because they're packaged with their own dependencies, you don't have to worry about configuring that VM for the case where one application requires a different version of Tomcat than the other. You can do that with your container environment, but you wanna be conscious of being able to limit, account for, and isolate resource usage: your CPU, your memory, disk I/O, network, et cetera. Cgroups allow you to ensure that one container won't be stomped on by another container on the same host. And then finally, a secure computing mode (seccomp) profile can be associated with a container to restrict the available system calls.

So really, all of these are security capabilities that you can use for any running process, and in this context you wanna think of containers as another running process and apply the security features, functions, and principles that you would apply to any running process. Another thing to think about: if you want to further enhance your security and minimize your attack surface, Atomic Host is the host for you, right? That is a container-optimized operating system. It's specifically tuned for containers and really only has the packages that you need to run containers, and there is no yum install, so people can't go out there and arbitrarily add different packages to the environment. A further proof point of the security features available with Red Hat Enterprise Linux is that RHEL 7.1 received Common Criteria certification, including certification of the Linux container framework support, so all of these kinds of things I've been talking about. And I'm not talking about the Docker runtime when I say that, right?
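To make the cgroups and seccomp points concrete, here's a sketch of how both surface in a pod spec. All names and numbers are illustrative, and the seccomp annotation shown is the mechanism Kubernetes used in the OpenShift 3.x era discussed here, not necessarily what a current cluster expects.

```yaml
# Illustrative pod: cgroup-enforced resource ceilings plus the
# runtime's default seccomp profile to restrict system calls.
apiVersion: v1
kind: Pod
metadata:
  name: limited-app
  annotations:
    # Kubernetes of this era set seccomp via annotation;
    # "runtime/default" applies the container runtime's default profile.
    seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7
    resources:
      requests:          # what the scheduler reserves for this container
        cpu: "250m"
        memory: "256Mi"
      limits:            # cgroup ceilings: the container can't exceed these
        cpu: "500m"
        memory: "512Mi"
```

The limits are what keep one noisy container from stomping on its neighbors on the same host, exactly the cgroups behavior described above.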
But with all of these surrounding capabilities, we were the first Linux distribution to receive that Common Criteria certification for the Linux container framework support. Okay, so that's the OS and multi-tenancy.

We also want to think about the content in your containers. Applications and infrastructure these days are really built from existing sources. Absolutely you add your own code, but you're often building on top of open source components, such as the Linux operating system for your infrastructure, the Apache web server, JBoss Enterprise Application Platform, Postgres, Node.js. Containerized versions of these packages are now readily available, so you don't have to build your own container images for them. But as with any code you download from an external source, you need to know where the packages originally came from, who built them, and whether there's any malicious code inside them. So Red Hat provides a large number of certified images, including the RHEL base images, various language runtimes, middleware, databases, and more on the Red Hat Container Catalog. Red Hat certified containers run anywhere Red Hat Enterprise Linux runs, from bare metal to VMs, and they're supported by Red Hat and our partners. The container image content that we deliver is packaged from known source, and Red Hat provides security monitoring on these packages. You can see in this screenshot a feature that we added to the Container Catalog in May called the Container Health Index. We are publicly exposing the grade of each container image, detailing information you might need about any known vulnerabilities, so that you can be aware of the security aspects of that container before you download the content.
And of course, we keep up to date with security fixes and patches, and we will rebuild those container images whenever security fixes are released. As you can see in this screenshot, for version 3.4-1315 a new vulnerability was discovered, we released 3.4-1316 with a vulnerability fix, and the grade went back up. Of course, there are gonna be times when you need content that Red Hat doesn't provide, and in those situations we recommend you use container scanning tools. There are a number of them out there on the market. Many of them use continuously updated vulnerability databases, and they will help you identify the open source components inside the container image and any known vulnerabilities associated with those components. Some of the scanners will also handle commercial software, but the majority of them are really focused on open source content. So you can think about things like JFrog Xray, Twistlock, Aqua Security, Black Duck Hub; these are some of the tools out there. For Red Hat content, there's also a scanner called OpenSCAP that you can use as well.

So you've got a bunch of content that you may have downloaded from a public source, but that's really only the starting point for the applications that you're building, the applications that really drive the business value for your company. You want to be sure that you manage access to, and promotion of, the container images that your teams use, both the ones you download and the ones you build, just as you would manage access to and promotion of any other type of binary. You can do that using a container registry or a binary repository that knows how to work with containers. You start to see both terms used in the industry these days, but typically they're referring to things like JFrog Artifactory, the Sonatype Nexus binary repository, Docker Trusted Registry, and OpenShift comes with an integrated registry that we call the Atomic Registry.
You can use that to manage your container images, but OpenShift also integrates with the popular container registries that are available, the ones that I've already mentioned. When you think about a registry, you want to be sure that it enables you to store and see security metadata on your images, that you have access controls, and that you can do things like say, hey, this image is okay for deployment in production, but this other image is only okay for deployment in a development environment because it hasn't been fully vetted yet. So you really want the ability to apply policy controls in your registry as well. One of the big values here is that if you have those kinds of policies in place, or if you choose to have multiple registries, maybe one only for dev and sandbox and another only for production, you can make it easy for the dev teams to explore things that haven't yet gone through the full vetting process while making sure they don't get into your production environment unless they've been fully vetted.

Similarly, builds are a really important part of the security process, right? As with any application, managing production builds is a key element of your process. You need a definitive build environment for your production container deployment. You really don't want developers doing docker build on their laptops and then deploying that build to production, because that's not a reproducible build environment. You want your container build process to use the kind of automated and integrated CI process that you use for other types of applications, and if you're not doing that yet, you should really be looking into it. There are a lot of great tools for automating your builds and doing continuous integration. OpenShift comes with an integrated instance of Jenkins for CI and can also be integrated with your own CI environment if you prefer.
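A minimal sketch of what a definitive, server-side build looks like in OpenShift terms, as opposed to a laptop docker build: a BuildConfig that pulls source from version control, builds against a known builder image, and pushes the result to the integrated registry. The repository URL, image names, and webhook secret are all placeholders, not anything from the talk.

```yaml
# Illustrative OpenShift BuildConfig: the build runs in a defined,
# reproducible environment and the output lands in the registry,
# never on a developer's laptop.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://example.com/myorg/myapp.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest          # trusted builder image stream
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest             # pushed to the integrated registry
  triggers:
  - type: GitHub
    github:
      secret: "<webhook-secret>"     # placeholder; kicks off CI on push
```

Wiring the webhook trigger to source control is what turns this into the automated CI flow the talk describes.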
And again, if you're doing this yourself, you can put together a process that includes your preferred CI tools. You don't have to have a container platform; we just think it makes it easier if you've got everything in one place, there's less for you to maintain, and you can let somebody else maintain it. Once you've got a build completed, the image should be pushed to a registry, just as we've been talking about, and that can be done automatically when you use OpenShift, for example. You also want to be sure that you proactively check container contents over time. New vulnerabilities are identified daily, so just because your image was vulnerability-free at the time you did your build doesn't mean it's going to stay vulnerability-free, even if the code hasn't changed at all. So again, you want to leverage tools like OpenSCAP, Black Duck Hub, and JFrog Xray as part of your CI process, just like you would integrate other types of security tools, like the static analysis tools you use to look at your own source code, IBM Rational AppScan or HP Fortify. You want to use these security tools to introspect the container images that you're working with.

Another really great thing about containers that can actually help you enhance security in a more automated fashion is the layered packaging model. If you work in a regulated environment that requires separation of concerns, the layered packaging model really helps you do that. You can have your operations team be responsible for the core build, perhaps the RHEL images that they pull down from Red Hat. You can have your architects be responsible for adding the middleware content, layering it on top of that core build image. And then your app dev team really only needs to think about the code they need to write to develop that application and build the full container.
And when you take this kind of approach and you have a fully automated CI process, you can also look at triggers to help you know when to rebuild containers. Those triggers really span both the container build and the container deployment topics, so I'm gonna talk about them a little bit here. But before I do that, let's talk about some additional things you need to do around deployment and continuous deployment for security. Let's say you've run your CI process, you've run all your scanners, there are no known security issues with the contents of the images that you've built, and you're ready to deploy to production. Well, it may be that without realizing it, you've got an application container that requires root privileges in order to do everything it knows how to do. But your security team or your production team has a policy that says no container is allowed to run as root. And in fact, that's a really good security best practice: don't let your containers run as root, just don't do it. So you wanna leverage any capabilities you can to automatically prevent deployment of privileged containers. OpenShift, building on Kubernetes, comes with something called security context constraints, which allow you to do that automatically. You can make sure that the SELinux context is defined for a container, you can check to see if it requires root privs and, if it does, prevent it from being deployed; there are whole sets of things you can do here. It's also worth mentioning at this point, for any of you who are really familiar with SELinux, that a lot of folks are used to turning it off in their base RHEL images, because not every application knows how to run in an SELinux environment.
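To illustrate the "no root, no privileged containers" policy in spec form, here's a hedged example of the container-level security context that works together with OpenShift's security context constraints. The pod name and image are made up.

```yaml
# Illustrative pod: the kubelet will refuse to start this container
# if its image tries to run as UID 0, and it can never be privileged.
apiVersion: v1
kind: Pod
metadata:
  name: no-root-app
spec:
  containers:
  - name: app
    image: myregistry.example.com/myapp:1.0   # hypothetical image
    securityContext:
      runAsNonRoot: true    # enforce the "don't run as root" policy
      privileged: false     # never grant the host's full privileges
```

With the default restricted SCC in place, a deployment that violates this is simply not admitted, which is the automatic prevention the talk is describing.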
One of the nice things about a containerized application running on OpenShift is that OpenShift manages SELinux for you, so containerizing that application means you can take advantage of the protections SELinux provides without having to worry about those SELinux elements. It really gives you the ability to have a much more secure environment.

So back to triggers, and thinking about image registries. You want to monitor those image registries to make sure that you become aware if a newly discovered vulnerability shows up, and if it does, you want triggers in place to make sure the appropriate people know about it. Let's take a look at an application that's built using the three container image layers I talked about earlier: core, middleware, and finally the application layer. If an issue is discovered in the core image and you've got these kinds of monitoring capabilities available, you'll get notification that the core image has a known vulnerability. Ideally you'd also get notification that a new image is available from the original public repo you pulled that core image from. These are capabilities that OpenShift supports. You get notification, and you can then, using these triggers, automatically restart your CI/CD process for that application, so that the middleware image is rebuilt using the updated core image and then the application is rebuilt using the updated middleware image. And you make sure, of course, that as part of this the application goes through all of its security tests and its UAT testing. Now you're in a position where you can deploy the updated container to an environment. You don't need to take down the old one yet: you deploy the new one, you migrate users over, you make sure everything's up and running well in production, and then you can take down the vulnerable container.
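The rebuild-on-updated-base-image flow just described can be sketched with OpenShift's image change triggers. This is only the triggers stanza of a hypothetical application BuildConfig; the image stream name is made up.

```yaml
# Illustrative triggers section of an application BuildConfig:
# when the tracked middleware layer is rebuilt (say, because its
# core RHEL image was patched), OpenShift automatically rebuilds
# this application layer on top of it.
triggers:
- type: ImageChange
  imageChange:
    from:
      kind: ImageStreamTag
      name: middleware:latest    # hypothetical upstream layer
- type: ConfigChange             # also rebuild if this config changes
```

Chain the same kind of trigger on each layer (core to middleware, middleware to app) and a fix at the bottom of the stack ripples all the way up through CI without anyone kicking off builds by hand.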
On the other hand, if it's a severe vulnerability, you may decide you wanna stop that container right away while you go through this process. By leveraging and automating your continuous integration and continuous deployment process with OpenShift, the entire process of rebuilding the app to incorporate the latest fixes, testing it, and making sure it's deployed everywhere within your environment can be 100% automated, with whatever additional gates and policies you wish to incorporate.

Let's talk a little bit more about the container platform. One of the big reasons for using a container platform is the orchestration features that help you manage container deployments at scale, right? You really want automation and orchestration to help manage which containers should be deployed to which hosts, monitor host capacity, handle container discovery (knowing which containers need to access each other), control access to and management of shared resources, and monitor container health. All of these things are typically offered through your orchestration platform, and OpenShift delivers orchestration through Kubernetes, which, as many of you probably know, was originally developed by Google. One of the key concepts of a container platform and its orchestration is that it's the master nodes that have the controlling capability to manage deployment and process around the places where your application containers are deployed, and for that reason the masters have a fair amount of privilege in order to do that work. So you really need to be sure that you have strong multi-tenant security on the platform itself. You want the ability to isolate different teams, applications, or deployment environments from each other. You may want to, say, isolate a dev, a test, and a production environment from each other; that's a pretty common use case.
And one of the other things you really want to get out of a container platform is the self-service capabilities that make it easy for the dev teams to do what they want to do and make it faster to get out the applications that support the business value. So you need to balance what the dev teams need, that nice self-service experience, with the ops and security teams' needs for managing the security of that production environment, and a really good container platform helps you do that. As part of a secure multi-tenant master, you want to make sure that all access to the master is over TLS, that access to the API server is controlled, ideally X.509 certificate or token based, and that you can use project quotas to limit how much damage a rogue token could do. All of those are things you want to be sure you've got. Ideally, you also want to be thinking about a platform that supports image signing, right? One that makes it easy to sign those images as you build them. Maybe you use external signing, but you also need to verify the signatures as part of your deployment process.

And you want secrets management. This is a place where OpenShift has some capabilities today: you can mount secrets into containers using a volume plugin, and the system can use secrets to perform actions on behalf of a pod. Etcd is where OpenShift stores tenant secrets today, and today, in OpenShift 3.5, etcd is not as secure as many enterprises would like it to be. So this is a place Red Hat is investing: when OpenShift 3.6.1 ships, we will have secrets encrypted at rest in etcd. We're also investing to make it easier to integrate external certificate authorities and vaults for improved secrets management. So there's a lot that can be done in this space already, and more to come.
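The secrets-as-mounted-volumes mechanism just mentioned can be sketched like this. The secret name, key, and image are placeholders; the base64 value is just an encoded dummy string.

```yaml
# Illustrative secret plus a pod that mounts it read-only, so
# credentials live in the platform (etcd) rather than being baked
# into the container image or an environment file.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=        # base64-encoded placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: myregistry.example.com/myapp:1.0   # hypothetical image
    volumeMounts:
    - name: creds
      mountPath: /etc/secrets   # app reads /etc/secrets/password
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```

Because the secret is delivered at run time, rotating the credential means updating the secret object, not rebuilding and redeploying the image.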
And again, you also want to think about integration with the security ecosystem and how well the platform performs there; we'll come back to that a little bit later. Okay, so network defense, right? In traditional data centers with traditional applications, network defense is a big part of the way the security team thinks: how do I protect my internal environment from the external? How do I segment things on my network? You need to think about this for containers as well. That said, we think one of the best ways to do this, given the scale at which containers are coming and going in an enterprise environment, is software-defined networking as a way to help automate it. So the first line in network defense comes from network namespaces. And, okay, sorry, this is the area that I'm least deep in, so I rely on my cheat sheets here. If you guys want a deep dive on this, I know just the person to bring in to do one.

You haven't been using cheat sheets the whole way? Because I am totally impressed by that.

I use some cheat sheets, I'll admit it. It's a mix.

There have been a lot of acronyms in there so far, just to slay me. I have one question that just popped up; this might be a good time to ask it. Edward was asking: you showed us the registry health check, and that, I think, is for Red Hat images. Is there something built into OpenShift, or is it all third-party stuff, that would show a health page for container images or something similar in the OpenShift registry?

Oh, yep, that's a great question. There are a couple of ways to show security data in the OpenShift UI, and you are correct that the Red Hat Container Catalog today provides the health index just on Red Hat images; we may extend that over time to the rest of the images that are available from the Red Hat Container Catalog.
All the images are supported, but the health index is only available on Red Hat content. When it comes to looking at security data in OpenShift itself, there are a couple of places where that can show up. For example, OpenShift comes with a tool called CloudForms, which is part of the management offering from Red Hat and helps you manage across a wide variety of deployment environments, and CloudForms is one of the ways you can run the OpenSCAP scans I mentioned earlier. So if you need to know the vulnerability data about Red Hat content, and maybe you aren't going back to the Container Catalog because you've already got that image stored in your registry, you can run an OpenSCAP scan. There is a UI, I don't have a screenshot in this deck, where you will see the results of that scan, and if a known vulnerability is discovered and you have your policies set appropriately, you can prevent deployment of that vulnerable image. So there are multiple ways to prevent deployment of vulnerable images. The Atomic Registry also has a GUI, and I'll be honest, I'll have to get one of my colleagues to tell me exactly whether the metadata I was talking about is visible in the GUI or through the CLI; I will look into that. But registries like JFrog Artifactory or Sonatype Nexus absolutely do show you that kind of vulnerability data, and many of the scanners have integrations so that you can see some of that data from third-party scanners in those registries. Thanks, sure.

Okay, so back to networks and namespaces, right? With network namespaces, each collection of containers, known as a pod, gets its own IP and port range to bind to, which allows you to isolate pod networks from each other on the nodes.
And because the proliferation of IP addresses and ports makes networking more complicated when you have a large-scale container deployment, we really strongly recommend using SDN, software-defined networking, to help you handle that complexity. So OpenShift comes with the OVS multi-tenant plug-in; the OpenShift platform comes with software-defined networking built in. You can also plug in other SDNs if you have a preference; an example would be Nuage, another software-defined networking solution you could plug in to OpenShift. The real point is that if you're using a container platform, you wanna make sure it can leverage software-defined networking and gives you the ability to plug in the SDN of your choice. Again, doing this at scale is gonna be very difficult with traditional networking tools.

That said, another thing that comes up frequently with security teams is that they're used to using network scanners as part of their protection scheme for the network environment. What we recommend there is that many of those network scanners, even if they're not yet designed for software-defined networking, can be containerized and run as super-privileged containers. You just have to make a special policy for them to give them the level of privilege that they need. But it is one of the things to think about; there's actually a lot of capability available in these solutions, so it's definitely worth looking into. Something else to mention quickly is that there is the ability to control egress traffic using either a router or a firewall method, so that you can use more traditional methods such as IP whitelisting. You might use that to control which users have access to certain databases.
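The egress whitelisting idea just mentioned can be sketched with OpenShift's egress policy resource. This is a hedged example: the CIDR ranges and policy name are invented, and the exact resource shape may vary between OpenShift releases.

```yaml
# Illustrative OpenShift egress policy for a project: allow traffic
# out to one database subnet and deny everything else leaving the
# cluster, a software-defined version of IP whitelisting.
kind: EgressNetworkPolicy
apiVersion: v1
metadata:
  name: restrict-egress
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 10.10.1.0/24   # hypothetical database subnet
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0      # block all other external destinations
```

Rules are evaluated in order, so the broad Deny at the end acts as the default while the earlier Allow carves out the one permitted destination.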
And, introduced as a tech preview in OpenShift 3.5, there's a new network policy plugin that improves upon the way the OVS multi-tenant plugin can be used to configure allowable traffic between pods. Network policy allows configuration of isolation policies at the level of individual pods. This is still in tech preview in OpenShift 3.6, but again, lots of great capabilities here, and this is a space where a traditional security team will be particularly interested.

Okay, trying to keep an eye on the time too here, but I think we're good. Storage, right? There's a lot of conversation today about stateless applications, et cetera, et cetera, but there are still a lot of applications out there that need storage, and even if you're doing a stateless app, you might want to store some information somewhere. That's really where attached storage comes in, and of course, you need to secure your attached storage. A container platform with orchestration tools again makes that easier. OpenShift has plugins for multiple flavors of storage, including NFS, AWS Elastic Block Store, GCE Persistent Disks, GlusterFS, iSCSI, Ceph, and Cinder. The way in which you protect your storage volumes really varies depending on the type of storage. A persistent volume can be mounted on a host in any way supported by the resource provider, such as ReadWriteOnce, ReadOnlyMany, or ReadWriteMany. For block storage, such as EBS, GCE Persistent Disks, or iSCSI, you can use your SELinux capabilities to secure the root of the mounted volume, making the mounted volume owned by, and only visible to, the container it's associated with. And for shared storage like NFS, Ceph, or Gluster, you can manage access by adding the group ID of the persistent volume to the supplemental groups of the pod. And then by default, you want to be sure that data in transit is encrypted, which on OpenShift it is, via HTTPS, for all components communicating with each other.
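The supplemental-groups technique for shared storage can be sketched like this. The group ID, claim name, and image are illustrative; the GID would have to match whatever group actually owns the exported volume.

```yaml
# Illustrative pod: processes join the supplemental group that owns
# a shared volume (NFS/GlusterFS/Ceph), so only pods granted that
# GID can read or write the data.
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-app
spec:
  securityContext:
    supplementalGroups: [5555]    # hypothetical GID owning the export
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7
    volumeMounts:
    - name: data
      mountPath: /var/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data      # hypothetical PVC bound to an NFS PV
```

For block storage the equivalent control is handled by SELinux labeling of the mounted filesystem, as described above, so the two approaches cover the two storage families.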
API management is another thing to think about, right? Especially when we start thinking about newer applications that are composed of microservices, but really for any application that has externally facing APIs, you want to think about how you're going to get appropriate governance of those APIs. One approach we'd recommend is using an API management tool. OpenShift now includes a containerized version of the 3scale API gateway that helps you manage API security. 3scale gives you a variety of standard options for API authentication and security, and in addition, you have the option to use application and account plans that let you restrict access to specific endpoints, methods, and services, and apply access policies to groups of users. One of the cool things about application plans, if you use an API gateway, is that they allow you to set rate limits for API usage and control traffic flow for groups of users, and you can automatically trigger overage alerts. That can sometimes be a good indicator: if you get a sudden spike in usage, you want to be able to stop it quickly, because it could be an indication of an attack. So if you're getting alerted about applications that reach or exceed rate limits, that's a great sign that it's something you need to look into. A lot of good things to think about there too.

Okay, federated clusters. This is actually a forward-looking slide, and something that Red Hat and the Kubernetes community are working on. Federation is useful for very large scale, high availability, global deployments that require multiple clusters and multiple availability zones.
The concept is that you can use federated clusters to manage more than one data center in different regions, and that could be with the same public cloud provider, or federated clusters will also help you manage across different public cloud providers. So there's some really cool stuff coming here. And if you're managing federated clusters, you need to be sure, again, that your orchestration tools have the security you need across those different deployment platforms. Some of the key elements in the works here are federated secrets, giving you the ability to automatically create and manage secrets across all the clusters in a federation, making sure that the secrets are kept globally consistent and up to date, even when some clusters are offline; and federated namespaces, creating namespaces in the federation control plane to make sure they're synchronized across all the clusters in the federation. Once again, this is forward-looking, this is something that's in progress. There are elements of this in Kubernetes today, and Red Hat, again, is working with the community to get these ready for enterprise support. I mentioned the security ecosystem earlier. If you're a security professional, or you're talking to your security team, there's a whole series of security tools that security teams are already using today, or that they really want to be sure can be used in a containerized environment. Some of these include specialized identity and access management tools. Of course, most of us think about things like Active Directory, but there are also tools that are focused on privileged access management, and security teams often use those tools to make sure not only that they're controlling privileged access, but that they can audit it.
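The federated-secrets idea, keeping a secret consistent everywhere even when some clusters are offline, can be sketched conceptually. This is not the Kubernetes federation implementation (which works through the federation control plane); the cluster names and in-memory "stores" here are invented just to show the propagate-and-reconcile pattern.

```python
# Conceptual sketch of federated secret propagation: push a secret to
# every cluster in the federation, and remember offline clusters so
# they can be reconciled once they come back. All names are invented.

clusters = {
    "us-east":  {"online": True,  "secrets": {}},
    "eu-west":  {"online": True,  "secrets": {}},
    "ap-south": {"online": False, "secrets": {}},
}

def push_secret(clusters, name, value):
    """Write the secret to every online cluster; return clusters pending sync."""
    pending = []
    for cluster, state in clusters.items():
        if state["online"]:
            state["secrets"][name] = value
        else:
            pending.append(cluster)  # reconcile when it rejoins
    return pending
```

The real work in federation is exactly this reconciliation loop done robustly: tracking which clusters still need the update and converging them to a globally consistent state.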
I mentioned earlier external certificate authorities, external vaults and key management solutions, the container content scanners that I've talked about, as well as other types of vulnerability management tools. There are also some container runtime analysis tools; both Aqua Security and Twistlock do some container runtime analysis, and those are useful. And again, you want to be integrating with your security team's security information and event management (SIEM) system. You want to be collecting the logs from your container deployment, both the platform and the applications, but the platform in particular is probably the most interesting thing to your security team. You want to flow that information into the SIEM environment so that it can be aggregated and viewed in the tools that the teams are used to. So again, as you look toward production-grade, broad-scale deployments, you want to be thinking about solutions that help you do this kind of integration. As I said, we think of course that OpenShift is one of the best ways you can do that. We support all the kinds of capabilities that I've been describing. And of course you can do this yourself by using some of the open source community tools, but we think you get added value from OpenShift by having it all in one place, having it be supported, and knowing that you've got a stable enterprise partner to work with and help you manage all those pieces together. Diane will have a PDF available of this with links to some additional information, including the Ten Layers of Container Security paper, which goes into a lot of the content that I've covered today, in some cases in more depth. And I would be happy to take questions. All right, well, there is one more question here, and really, thanks. Because first, thank you for doing this, because I do lots of talks on each individual aspect of security.
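The "flow platform logs into the SIEM" step usually means normalizing events into a structured record the SIEM can aggregate. A minimal sketch, with the caveat that the field names below are illustrative and not any particular SIEM's schema:

```python
import datetime
import json

# Sketch of normalizing a platform log event into a JSON record that a
# SIEM can ingest and aggregate. Field names are illustrative only.

def to_siem_event(source, severity, message):
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": source,       # e.g. the platform API server vs. an app pod
        "severity": severity,   # the SIEM aggregates and filters on this
        "message": message,
    })
```

The point of the normalization is that once platform and application events share one shape, the security team can view containers in the same dashboards they already use for everything else.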
And just last week, we did something with NeuVector, who was doing network security and SDN, checking out the network layer. And I think this is the first time that I've seen someone do a presentation that covered enough of each of the different areas so that you actually understood how they were all interconnected and necessary. So this has been great for me, because it sort of pulls all of that together, and I really appreciate it. So thank you for that. Edward is asking a question: you mentioned Twistlock and AquaSec and JFrog and a few others as well, but he's wondering if you have any thoughts about Sysdig, Falco, or other utilities to help monitor, since you get to play with all of them in your role. Those are good questions. And I'll be honest to say, I'm not as up to speed on Sysdig as I'd like to be, and I'm not sure if I've heard of Falco before. Is that FALCO? Yeah, that's what he's saying. And I'm going to unmute him so he can ask his question. He can unmute himself too. And let's see if we can get him to do this. Where is Edward? There we go. Edward, how about if you ask your question? Okay, yeah, sure. Thanks for this, Kirsten. Yeah, we're looking at some utilities to help monitor running containers. So we've found a utility from Sysdig that's sort of like a tcpdump, but they also have another product called Falco, which allows you to set up a policy and do learning. So if a shell gets spawned in a container, that can be a message that alerts a team or so. Okay, interesting. Well, I will definitely make a point of learning more about those. It sounds to me that Falco at least would be complementary with something like Black Duck Hub or JFrog Xray. Twistlock and Aqua Security each have two solutions, one of which may be an interesting one to explore: both of them have scanners and then container runtime analysis.
And at least my understanding of how the Twistlock runtime analysis works is that it takes a look at all of the information in the Dockerfile. When it takes a look at the container image before it's launched, it collects a bunch of config info. It also monitors the initial behavior of the application in the container when it's first launched, so that it gets a sense of what the pattern is for the container's behavior. And then if it sees deviations from that pattern, it would notify you. So I don't know how that compares to the way Falco determines when to notify you, but I will definitely look to get up to speed on both Sysdig and Falco. I know that I had the Sysdig folks on before in an OpenShift Commons briefing, so Edward, I'll send you an email, if you share your email with me, with some links to that. About three weeks ago, Liz Rice from AquaSec did a Tech and Talk, and she's also done a really interesting open source project under the Aqua Security GitHub repo called kube-bench, which takes the, I'm gonna say it correctly, the Center for Internet Security's Kubernetes benchmark tests and automates them to run against your Kubernetes cluster, a really interesting project. So you might want to take a look at that; kube-bench is something that's also out there that you could take a look at. And NeuVector is also using those security benchmarks in their offering. So the thing about security is there are so many angles to it, but it's interesting to see the different approaches people take. And your question earlier, Edward, about the dashboard stuff, I'd love to see a visualization, even if it's a third-party one, baked into OpenShift as well, so that it surfaced alerts. And that's part of the thing: normally in Tech and Talk I'm not so OpenShifty, but one of the things is OpenShift can't be everything to everybody.
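The baseline-then-alert pattern described for runtime analysis can be shown with a toy sketch. Real tools like Twistlock, Aqua Security, or Falco watch much richer signals (syscalls, network connections, file access) and do this in the kernel or at the container runtime; this only illustrates the learn-a-pattern, flag-deviations idea, and the process names are invented.

```python
# Toy version of the baseline-then-alert pattern used by container
# runtime analysis tools: record what a container does during an initial
# learning window, then flag anything outside that baseline.

class RuntimeProfile:
    def __init__(self):
        self.baseline = set()   # behaviors observed during learning
        self.learning = True

    def observe(self, process):
        """Record a process during learning; alert on deviations afterward."""
        if self.learning:
            self.baseline.add(process)
            return None
        if process not in self.baseline:
            return f"ALERT: unexpected process '{process}'"
        return None
```

This also shows why an unexpected shell spawning in a container (the Falco example Edward gave) is such a natural alert condition: it's a process that was never part of the learned baseline.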
So we really love working with these third-party folks who are experts in these things and integrate very easily with OpenShift and Kubernetes. So there's lots of content out there. So please. Yeah, absolutely, a comment on kube-bench and the CIS benchmark: my best understanding is that you use those benchmarks really for making sure that the overall deployment environment, the platform and the environment, is secured, and that something like Aqua Security, Twistlock, and it sounds like also Falco, are more about the behavior of the application. Edward, is that right, that Falco's monitoring the application behavior? Yeah, correct, it's a kernel module that gets loaded, and based on, I guess, a pattern or a call, you can have it alert based on that event. Interesting. So I'm definitely going to look into that. Cool, thank you. Love it when I learn something new. Yeah, I absolutely adore hearing a new one. And also, I will try and get someone from the CloudForms team to do a talk on the OpenSCAP scanning and integrating that, sometime in the future. I know just the person for you to reach out to. I remembered reading a blog post somewhere, and I found it and popped it into the chat for folks. If you know the person, that would be great. Well, I was going to suggest it might be a different person, but go ahead. But who would your person be? Lucy Kerner does an awesome presentation on using not just CloudForms, but CloudForms, Ansible, and Insights to automate the initial configuration of security profiles, so using OpenSCAP, and then to audit and automatically remediate as well. So she's got some great knowledge in this space. Yeah, I've heard her name bandied about, and I think I've actually met her twice in the hallways at some conference, because that's where I meet everybody from Red Hat. Never at a Red Hat office. It's always at a conference. You know, yeah, you work for Red Hat? Oh yeah, yeah, cool.
The other thing: I'm always looking for suggestions on other people to have on Tech and Talk, people who are passionate about some aspect of technology. Do you have any other suggestions? Yeah, you know, I do have one other person in mind, but I did not get a chance to connect and see whether, you know, she, this is someone, I guess I can share her name, she's presented at other events publicly. So, Alex. Boris them. I'm sorry? Say again? Boris them. Yeah, no, I can put you in touch. But Alexandra Schulman is a VP of an innovation center at Citigroup, and she's a great person. And of course, the people I'm gonna think of are in the security space, but she's a great person. She talks about the implications of security for cloud native apps and, as things change, what the angles are to think about there. So she's a possibility. Perfect, all right. Well, I'm also hosting an event in December, the OpenShift Commons Gathering, and looking for speakers for that. And I've been trying to find a good security speaker, so that might be a good slot to try and get her to do something there, because KubeCon's coming up December 6th and 7th, and we always do a big OpenShift gathering the day before. Sort of think of it as a prep session for KubeCon: get all the updates before you go in, so then you can go in loaded with questions. So we'll try and see if we can reach out and get some of those folks there. So, Kirsten, thank you so much for taking the time today to give this talk. It really was awesome. I'll add in the links that you had and post this as a blog post, along with the slides, on blog.openshift.com, and tweet it out as well. And it'll be uploaded on YouTube under the technical playlist along with lots of others. So again, thank you very much. My pleasure, and thanks for the good questions, especially from Edward. I know there may have been others, but thank you.