Hi, everybody. It's nice to see people in person; I'm sure everybody's been saying that. I'm Kirsten Newcomer, Director of Security Product Management for the OpenShift team, here to tell you a little bit about what we're doing with OpenShift and security. Just a quick comment on some of the ways that we contribute to the community: participation in the Kubernetes Security SIG, the API server SIG, SIG Auth, the CNCF Security TAG, Tekton Chains, Sigstore (which you heard a good bit about from Luke earlier), Keylime, and SPIFFE/SPIRE. I also want to talk a bit about what's driving our security investment. It was a great pleasure to listen to Yasmeen; Saudi Aramco is exactly the kind of customer that needs the security investment we're making, and customers like that help drive our thinking, so I appreciated seeing that presentation earlier today. The approach we take within Red Hat toward security is to build security capabilities into our offerings to enable a continuous and holistic approach: security throughout the application stack and throughout the application lifecycle, built in, not bolted on, enabling our customers to meet their security posture and regulatory requirements. That said, we also want to deliver business value that goes beyond just security protection. We want to make it easier to prioritize the security tasks you have; to provide better workflows and better feedback loops between developers, operations, and security teams, to really try to help people reach that nirvana of DevSecOps; to improve data quality and remediation; to simplify adoption of security practices by automating them and providing them out of the box, so you don't have to rely on a whole team of security experts; and to enable more informed risk management and collaboration, and more things up here to think about.
So as we look at OpenShift Platform Plus, which is what I'm going to be talking about today, a quick reminder of what's in it: OpenShift Container Platform, Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Security, Red Hat Quay, and OpenShift Data Foundation Essentials. As you can see, we're really working to provide a complete platform for app developers. When it comes to containers and Kube, I mentioned DevSecOps earlier, and the reason I do that is this: most of you here, I'm sure, are aware, but I still run into teams, security teams in particular, who don't realize it yet. In a Kubernetes environment, best practice is to never patch a running container. If you apply a fix to a running container and that particular instance of the image goes down, Kube or OpenShift will redeploy from the image itself, and you'll lose whatever fix you made. So it's really important that organizations design for automated build and deployment as much as possible. You don't have to automate every step, but you need to be organized in a way that allows you to easily do those updates. When we think about DevSecOps, the capabilities we're delivering with Platform Plus fall into three buckets: capabilities that help with controlling application security, features that help with protecting the platform, and finally, features that help detect and respond to runtime threats. I'm not going to walk through all of these. I'll just mention that the white boxes are the security capabilities available with OpenShift Container Platform across these three buckets. Blue is Red Hat Advanced Cluster Security. For those of you who haven't heard of ACS before, it came to Red Hat through our acquisition of StackRox a little over a year ago. You'll be hearing more about ACS and StackRox in this session later today.
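To make the "never patch a running container" point concrete, here is a minimal sketch of why: Kubernetes always recreates pods from the pod template in the Deployment spec, so a fix only sticks if it lands in the image reference itself. The names and image tag here are hypothetical.

```yaml
# Hypothetical Deployment: the image reference is the only durable
# place for a fix. Anything patched inside a running container is
# lost when the pod is recreated from this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          # Fix by rebuilding and bumping this tag, then letting the
          # rollout replace the pods.
          image: quay.io/example/payments-api:1.4.2
```

Updating that image field, for example via a pipeline commit picked up by GitOps, triggers a rolling redeploy; that is the automated build-and-deploy loop the talk recommends designing for.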
And then the red boxes are ACM, with both ACM and ACS being very strong on multi-cluster capabilities: multi-cluster management, multi-cluster security. So, stepping into the roadmap, there are three key categories for our security roadmap: identity, integrity, and observability. Supply chain security has been a very hot topic, certainly since the SolarWinds breach, and a lot of great work has been done in the community. At KubeCon North America in October of last year, I had an opportunity to listen to SolarWinds talk about how they are using Sigstore and Tekton Chains to improve their own internal supply chain; more on that later. I'm very proud that Red Hat is one of the founders of Sigstore and helped start that project; there's a lot going on there. If you were here this morning, you heard Luke talk about what it means to have keyless signatures and how that makes it much easier to integrate signing capabilities into a CI/CD pipeline. I've been with Red Hat for about six years, and for many years I've had customers ask me about the ability to add signatures to their custom images. It's a popular request, but it has not been easy to achieve. Sigstore is a real leap forward in this area. And I should say that Tekton Chains is now shipping as tech preview, I believe, with OpenShift. You heard earlier that Quay supports storing cosign signatures. Now, there are a couple of things on here that are a little bit further out. Encrypted containers: when we think about managing for trust, or a zero-trust environment, being able to store and then run encrypted containers is a piece of the puzzle we're working on. Also, rootless builds. As Yasmeen mentioned, Podman doesn't need privileges in the environment today. We want to make it possible to leverage components like that to do builds in an OpenShift environment without needing privileges.
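As a sketch of how the Tekton Chains signing mentioned here gets wired up, this is roughly what its configuration looks like in the `chains-config` ConfigMap. The key names follow the upstream Tekton Chains documentation, but check them against the version you're actually running.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: tekton-chains
data:
  # Record completed task runs as in-toto attestations.
  artifacts.taskrun.format: in-toto
  # Store signatures alongside the built image in the OCI registry
  # (e.g. Quay, which can store cosign signatures).
  artifacts.oci.storage: oci
  # Keyless mode: the short-lived signing certificate comes from
  # Fulcio instead of a long-lived private key.
  signers.x509.fulcio.enabled: "true"
```

With this in place, Chains observes pipeline task runs and signs their outputs automatically, which is what makes keyless signing easy to integrate into a CI/CD pipeline.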
That's work in progress as well. When it comes to the platform, integrity matters, especially as you heard in the MicroShift presentation and the conversation about edge that's happening there. Edge devices in particular, or single-node OpenShift environments sitting in malls or on devices, need more control, or need more security, I should say. They're subject to physical theft; they're subject to physical tampering. Being able to go beyond secure boot to remote attestation of an environment at the edge is also an area of investment. This is starting with RHEL 9 and a tighter integration with the Integrity Measurement Architecture: as we build RHEL 9 binaries, they will all have signatures stored in the kernel, enabling remote attestation. Then we will pick that up in OpenShift for RHEL CoreOS when OpenShift moves to RHEL 9 binaries. We've also been working upstream with the community to get Kubernetes to add support for user namespaces. Our container runtime, CRI-O, already supports user namespaces. This allows you to run a container as root within the container, but if there were an escape, the process would not be root outside the container. The Kubernetes community has not yet added that support; it's work in progress. There's a KEP, and one of our key engineers, Derek Carr, has been actively working upstream on it. Again, that's just needed to be sure that all the elements of Kubernetes can work with and understand the runtime features that are already available. Trusted execution environments: this is tied to confidential containers and some of the work we're doing upstream based on Kata Containers, being able to create sort of a Kube-native enclave, as it were, that's tied to a hardware environment to really create that confidentiality. Again, work in progress. Then, in some of the further slides here, we're going to talk about investments in observability. A common phrase in the security field is "security by obscurity," but that really isn't security.
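The user-namespace support described in that KEP takes the shape of a pod-level field; here is a hedged sketch of what a user-namespaced pod looks like. Availability of the `hostUsers` field depends on your Kubernetes or OpenShift version and its feature gates, and the pod and image names are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo                     # hypothetical name
spec:
  # From the user-namespaces KEP: run this pod in its own user
  # namespace, so UID 0 inside the container maps to an unprivileged
  # UID on the host. If the container were escaped, the process
  # would not be root outside the container.
  hostUsers: false
  containers:
    - name: app
      image: quay.io/example/app:latest  # hypothetical image
```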
You can't secure what you can't see; observability is really a key part of the security story here. I'm not going to spend a lot of time on ACS, and a couple of things on this slide are ACS, because Connor Gorman is going to do a deeper dive on that later. As I think you saw earlier, with OpenShift 4 these days we're really focused on providing standardization capabilities across multiple clusters. We find that more and more of our customers are using multiple clusters for many different reasons (Yasmeen gave us some of hers), and they need a consistent way to manage configuration, governance, security, and storage across those clusters. With ACM, one of the key areas of improvement in the security space is focused on improving secrets management: making it easier to manage secrets across the various elements of the solutions you're using, or secrets for your applications. But also, circling back to Sigstore, we want to think about signing more than just container images. ACM's job is to ensure consistent configuration across your fleet while giving you the ability to have some customizations within it, and all of that work is done in YAML; all of those configurations are stored as YAML files. You want to be sure those YAML files aren't tampered with, so digital signing of your YAML files helps to ensure that. This is done with an integration called Integrity Shield, which again can use cosign signatures as an option. There's also an update to ensure that ACM can leverage cloud keys and tokens when managing deployments. So again, how many of you have heard of Advanced Cluster Security? Only a handful; well, not too bad, maybe about a quarter or a third. I don't want to steal Connor's thunder, but like ACM, ACS works with OpenShift and also with GKE, EKS, and AKS. So in a multi-cluster environment it helps provide both runtime security and the ability to integrate security capabilities into your pipeline.
We'll talk a little bit more about some of those as we go. Most of the ACS content and roadmap is going to be covered by Connor, so we'll keep going here. When it comes to compliance, Red Hat has a long history of providing the ability to automate compliance with technical controls that are tied to security and regulatory frameworks. RHEL has OpenSCAP, which implements SCAP, the Security Content Automation Protocol. OpenSCAP is a NIST-certified scanner you can use with your RHEL servers to configure and check for compliance against things like PCI DSS. We've taken the same approach with OpenShift. The OpenShift compliance operator is available with any OpenShift subscription, and we're shipping the profiles that you see in the "available now" box today. You can run those profiles on any OpenShift cluster, and you'll get a report back as to which controls are compliant and which are not. If you wish, you can rerun and automatically remediate your cluster, applying those technical controls. You can also tailor your profiles. For example, the CIS OpenShift benchmark (CIS has moved to having benchmarks per Kubernetes distribution now, so there is a CIS OpenShift benchmark) recommends encrypting etcd, the etcd data store. Not every customer chooses to do that; some people feel it's sufficient to encrypt the underlying storage rather than etcd itself. So the compliance operator gives you information, but also a way to tailor to your needs. Additional profiles are planned. For those who aren't familiar, FISMA is a subset of the NIST 800-53 controls that the U.S. government requires for certain use cases. Essential Eight is an Australian profile; NERC is also U.S.; PCI DSS is for the financial industry pretty much around the world. Both ACS and ACM are integrated with the compliance operator to give you a visualization of compliance controls, and ACS provides workload compliance assessments as well.
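To make the compliance operator flow concrete, here is a hedged sketch of binding the CIS OpenShift profile to the default scan settings. The CRD group and profile names follow the compliance operator documentation, but verify them against your installed version; the binding name is hypothetical.

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan                     # hypothetical name
  namespace: openshift-compliance
profiles:
  # The CIS OpenShift benchmark profile shipped with the operator.
  - apiVersion: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-cis
settingsRef:
  # The operator's default scan schedule and storage settings.
  apiVersion: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```

Once applied, the operator runs the scan and produces per-control results, which is the pass/fail report described above; remediations can then be applied, or the profile tailored, for controls like etcd encryption that not every customer wants.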
One of the new things we're working toward with the compliance operator is support on the other Kubernetes clusters I mentioned earlier: EKS, AKS, and GKE. We'll also be adding profiles for the CIS benchmarks for those particular Kube distros, so work in progress there. Observability: there's a lot of investment in observability this year, across a range of things, distinguishing between workload monitoring and user-defined projects, improving Thanos and Prometheus support, and extending the ability to use remote storage. One of the interesting conversations at Cloud Native SecurityCon across the hall earlier today was about how many of us in this room are collecting logs and sending them to a SIEM, a security information and event management system, for processing. That can be somewhat time-consuming, so we've been investing in multiple ways to do it, including event streaming as a way to forward your logs. And then, of course, visualization matters. Now, this is again mostly about monitoring; you'll hear from Connor later about ACS visualization in the security space. But network visualization has also been a key area of investment: traffic monitoring, which also ties back to regulatory requirements, traffic metrics, and tracing. Three or four months ago or so we added ingress logging to our set of logs. And network policy and governance: how many of you are using network policies on your clusters? Okay. And if you're not using network policies, are you using service mesh? Anybody using service mesh? One. Okay. Are people using network policies and service mesh together? One. Small. So still low, still slow adoption of service mesh; that's interesting. Network policies, then, are your key way of ensuring that traffic is only going where you expect it to go.
This is also one of the places where, right now, it's not the simplest thing for folks to configure and manage. You can, of course, deploy a policy that says, by default, all of the pods in a single OpenShift project can talk to each other but can't go outside that project. But if you want to do something more interesting, more involved, or perhaps more controlled, that can take some additional work. So there are a couple of places where we're investing. First of all, today with ACS you can get network policy visualization, so you can see the traffic pattern that's been implemented by your network policies. ACS will also automatically recommend ways you can tighten network policy controls; it'll give you the YAML, you can simulate what that looks like in ACS, and then, if you wish, you can use ACM to deploy those policies wherever that application runs. We're also investing in some shift-left security there; I'll let Connor tackle that. But in addition to what you get with ACS network visualization, we think this is an area that's really important, and we're adding it into the core platform as well. The auto-recommendation is going to stay with ACS, but visualization and a better editor for managing network policies are on the platform roadmap. Multi-cluster gateway: again, more and more we're seeing multiple clusters across our customer environments, some on-premises, some in the cloud, so having a way to manage ingress and egress from a single gateway becomes more important. I'll just mention IPv6, single and dual stack, and eBPF, which is something we're investing in around network observability. eBPF is already something ACS is using to do additional system data collection, and most of the rest of these items are a little more performance-oriented.
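The "pods in a project can talk to each other, but nothing from outside" default described above looks roughly like this as a standard Kubernetes NetworkPolicy; the namespace name is hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-project            # hypothetical project
spec:
  # An empty podSelector applies the policy to every pod
  # in the namespace.
  podSelector: {}
  ingress:
    # Only pods in this same namespace may connect; once any
    # policy selects a pod, all other ingress is denied.
    - from:
        - podSelector: {}
```

Anything more controlled, say, restricting traffic per app label or per port, means layering additional policies like this one, which is exactly the editing and visualization burden the roadmap items aim to reduce.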
Then, of course, if you have applications that may need to talk with each other across clusters, cross-cluster networking is also important. At the moment that work is being driven by ACM with Submariner, and there's strong investment there as well. And a little bit more about mesh; well, going back to Submariner for a minute, it also enables IPsec tunnels, so multi-cluster networking gives you IPsec. For you all, does IPsec matter in terms of north-south traffic in the cluster? Are any of you using IPsec? Not so much yet. Mostly it comes up, for me, with customers in the telco space, where the 3GPP regulations tend to require it. And again, service mesh with federation: it sounds like that's something you all are still thinking about, whether to go to service mesh, or you're not ready yet. But when you're ready, we'll have multi-cluster service mesh there as well. And then, of course, multi-cluster storage. OpenShift Data Foundation Essentials comes with OpenShift Platform Plus, and if you upgrade to the full enterprise OpenShift Data Foundation, there are a couple of things you get. We already have the ability in ODF to encrypt per PV, per persistent volume; additional investments are being made there for encryption on multiple levels, out-of-the-box replication, and disaster recovery. Some people think of security fairly narrowly, but recovery is a key part of your security story. With Quay, thanks to Daniel, we've already added support for storing cosign signatures. ACS has a scanner: when we acquired StackRox, the StackRox scanner, now the ACS scanner, was built from a fork of Clair, and we've been improving Clair in the time since that fork was done. So we currently have two Clair-based scanners: Clair v4 and the ACS scanner.
We'll be working this year to get to one Red Hat Clair-based scanner that has the best of both worlds, including language-based scanning. The ACS scanner today can do Java, Node.js, Ruby, and Python, and we'll be working to add Golang support for our common scanner. If any of you ran into Log4Shell: if you're doing Java, ACS was able to find and identify the Log4Shell vulnerability, as well as Spring4Shell. So if you haven't had a chance to check it out, it's definitely worth your time. Back to supply chain security. We actually have many features within the product today that you can leverage to design a pipeline that will get you stronger security. Let's start with building the actual cluster: how do I deliver my configuration and my policies for the cluster itself? I can do that with GitOps and with ACM. I can do it for my apps with Argo CD if I'm not using ACM, or I can do both together. There's integration with secrets management tools, as mentioned earlier, and a CLI coming for bootstrapping the GitOps workflows. For my custom apps that I'm building in an OpenShift environment (apologies, this is a slightly old slide), I can actually build a Kube-native pipeline that has enough security gates built in that I can meet SLSA level 2 with the features that are coming or already available. I don't know if folks are familiar with SLSA, pronounced "salsa"; it's a set of standards for how to ensure that you've got a secure supply chain. So you can use OpenShift Pipelines, basically Tekton. You can use CodeReady Dependency Analytics to do a dependency scan in your IDE; it supports Eclipse Che and has IntelliJ and VS Code plugins. You build your image with build tools in OpenShift, you store that image in Quay, and you use the ACS scanner or the Clair scanner to scan that image for known vulnerabilities.
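A hedged sketch of the kind of Tekton (OpenShift Pipelines) pipeline just described, with a scan gate between build and deploy. The `git-clone` and `buildah` task names are common Tekton catalog tasks; the pipeline name and the scan task are illustrative placeholders, not exact product task names.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: secure-build                # hypothetical name
spec:
  params:
    - name: image
      type: string                  # image ref to build and push, e.g. in Quay
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone             # common catalog task
    - name: build-image
      runAfter: [fetch-source]
      taskRef:
        name: buildah               # build and push the image
    - name: scan-image
      runAfter: [build-image]
      taskRef:
        name: acs-image-scan        # hypothetical scan gate (ACS/Clair);
                                    # fail the pipeline on known CVEs
  # Tekton Chains can then observe the completed runs and sign/attest
  # them, which is where the SLSA provenance comes from.
```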
Vulnerabilities are not the only thing you need to think about here, however. ACS will also analyze configuration data, deployment YAMLs, and Helm charts for things like excess privileges, misconfigurations, and embedded secrets. So you really want to have both the vulnerability scanning and the app config analysis. This is an important part of shifting left and DevSecOps: if you can give developers information about security issues before you hit deployment, they're going to be much more likely to address and fix those issues, and ideally you give them that information in the tools they use every day. Once I've got that ready to go, Tekton Chains is now a piece of the puzzle. I can attest all the steps in my Tekton, my OpenShift, pipeline, and I can use the signing capability with Tekton Chains as part of that attestation. Again, I store my final image in Quay, store the other data, and leverage GitOps for the deployment. There are a couple of things coming. One is that in ACS we're adding an admission controller to validate image signatures; it's going to start with validating cosign signatures, and we'll expand that over time. So there's a really strong commitment to helping you produce content that you feel good about, that has security in your supply chain. We're also working to develop what we're calling a pattern, a validated pattern, that will give you a push-button way to deploy an out-of-the-box Tekton OpenShift pipeline with these security gates in place. We don't call it validated until we get a partner to participate, so if any of you are interested, please let me know; I'm looking for folks who will help us try that out. Okay. One of the other big initiatives: I was not here this morning, so I don't know how much we touched on HyperShift this morning. A little bit? Okay.
So for me, from the security perspective, what I really like about HyperShift is the separation of the data plane and the control plane that it provides. That is not something that has been available in the Kube community yet, and it is a request that I hear from our more security-minded customers. Having that separation really enables a whole series of use cases that we've not been able to address before. It also reduces spend, gives you network trust and segmentation, and brings that multi-architecture support.