I'm Connor Gorman. I've worked at StackRox, and now ACS via the acquisition, for the last five years, so quite a while. I'm an engineering lead, and I'll walk you through what we're working on. So very quickly, based on the hands raised in Kirsten's presentation, I want to walk through what ACS is, what it does, and why. Our goal was to build, and we did build, the first Kubernetes-native security platform, covering build, deploy, and runtime. Starting with build, it's really around securing the supply chain: vulnerability scanning in CI pipelines, scanning configuration YAMLs in CI pipelines, and giving as much feedback to developers as early as possible. Then we're looking at the deploy stage, where you have running containers, pods, and deployments, and what you're trying to do is continuously monitor them and continuously evaluate their vulnerabilities. I always like to say that because images are immutable, the number of vulnerabilities you'll find only ever increases. There are only new CVEs coming out, CVE-2022s, then CVE-2023s, so if you're running images for a long time, the number of vulnerabilities only increases. And then finally, from the runtime perspective: what's running in your pods? Is it what you expected to run? And then how can you take that runtime information and feed it back into the way that you configure your applications to make them more secure? Along the way, we integrate with all of the tools that we know and love: image registries, image scanners (we have our own, like Kirsten mentioned), different CI/CD tools to make it easy to integrate, DevOps notifications, generic webhooks, and SIEMs. So that's the whole breadth of integrations that we have, and our goal is to secure all of Kubernetes from the bottom up, everywhere.
And so we've been pretty Kubernetes-distribution agnostic, including obviously OpenShift, but also many of the Kubernetes distributions from the large cloud providers. From our perspective, we have four key priorities that we've already been working on and will continue to work on in 2022. The first is open source. The second is security innovation. I don't think the Kubernetes security landscape is finished. I've been working on it for five years; there's probably another five years left in it. How can we continue to expand this ecosystem? The third is portfolio integration: now that we're part of Red Hat, how can we all work together and provide a better solution for all of you? And then finally, the ability to run a world-class cloud service and managed service to allow everyone to use ACS as easily as possible. So we did it. We're open source. This has been quite the journey; it took a little bit over a year, and as of the last two months, we've been open source. Find us at stackrox/stackrox on GitHub. We're also in the CNCF Slack in the StackRox channel. I heavily encourage everyone to jump on there. If you have questions, we've got people ready to answer them, myself included, and also feel free to file issues, ask questions, and request features. We really want to build a community around this project that I've spent a lot of my professional career working on, and I'm really excited about sharing it with you all. So touching on the first portion, open source: unfortunately, clicking the "make repository public" button on GitHub is not the end of open source, right? Just for some context, the StackRox name is used for the open source, upstream project, and Advanced Cluster Security, or ACS, is the downstream product. You can find us at stackrox.io, which will take you right to our community page. As Kirsten mentioned, there's the combining of Clair and our Scanner, which is a fork of Clair from earlier in its life cycle.
We want to contribute a lot of that code upstream; we've made a lot of changes over the years. We have also relied on some Falco and Sysdig libraries, and we want to contribute back to that community now that we're open source as well. We had our own open source project called KubeLinter, which helps you lint Helm charts and Kubernetes YAMLs in CI/CD pipelines, and we want to extend that and continue to invest in that area. And then finally, overall, invest in the community around our project. I'm pretty new to open source, I'll be honest, so I want to continue to enable a community to contribute to our project, and any ideas and suggestions you have are very welcome. Cool. From a security innovation side, there's pretty much an infinite amount of things to do, I'll be honest, so we had to pare them down to four key categories. One is vulnerability management, which is always a hot topic, and with software supply chain security, an even hotter topic. We always want to focus on things that are actionable. So this first topic here, around identifying unused software packages, is really about helping you prioritize the vulnerabilities that you may have in your images and ensuring that you can mitigate them in any way possible. A lot of times there are packages in base images that may have vulnerabilities, but you never use them, never load them, and they're never touched by your application. At StackRox, a lot of times we have static Go binaries; the only thing that's supposed to run in that container is a static Go binary. So vulnerabilities in other packages that exist because of a base image, for example, may not be relevant to us. We want to make sure that we give you actionable information along those lines. Kirsten mentioned validating image signatures.
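To make the unused-package idea concrete, here is a minimal sketch, not StackRox's actual implementation: given vulnerability findings from an image scan and a set of file paths that runtime observation saw executed or loaded, keep only the findings whose package actually showed up at runtime. The input shapes (`findings`, `executed`) and field names are illustrative assumptions.

```python
# Illustrative sketch: prioritize vulnerability findings by whether the
# affected package's files were ever executed or loaded at runtime.

def prioritize_findings(findings, executed_paths):
    """Keep only findings whose package owns a file seen at runtime."""
    actionable = []
    for finding in findings:
        # A finding is actionable if any file the package owns was used.
        if any(path in executed_paths for path in finding["package_files"]):
            actionable.append(finding)
    return actionable

# Hypothetical scan results for an image built on a general-purpose base.
findings = [
    {"cve": "CVE-2022-0001", "package": "openssl",
     "package_files": ["/usr/lib/libssl.so"]},
    {"cve": "CVE-2022-0002", "package": "curl",
     "package_files": ["/usr/bin/curl"]},
]

# Runtime observation: only the static Go binary and the TLS library were
# ever loaded; curl came along in the base image but was never run.
executed = {"/app/server", "/usr/lib/libssl.so"}

actionable = prioritize_findings(findings, executed)
print([f["cve"] for f in actionable])  # only the openssl CVE remains
```

The real signal would come from kernel-level process and file observation rather than a hand-written set, but the filtering step is the same shape.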
We have the ability within our product now, as part of an admission controller, to gate whether or not images enter your environment based on whether or not they're signed correctly, against the signatures that you know and trust. We're continuing to expand that functionality as well. Then, from a bottom-up approach, we want to provide better host-level vulnerability scanning. Kubernetes and the entire ecosystem is really built from the node up: the nodes, the pods, and then the security of Kubernetes itself in terms of vulnerabilities that may exist. You really want to cover that entire lifecycle, and the next step for us is host-level vulnerability scanning. And then finally, I want to make vulnerability scanning easy for everyone. As a developer, my ideal flow is that I build an image locally, scan it locally and make sure it looks good, push it to CI, where it gets scanned again, deploy it to production, where it gets scanned again, and then we keep continuously scanning it. Vulnerability scanning is not a one-time thing; it's a continuous process. And from my perspective, I want to be able to do that locally. From a policy management perspective, we run policies against all of your deployments and configurations. We want to help you bundle them together. How can you move them between different instances of StackRox? You may have air-gapped environments. How can you group them logically? And then finally, how do you take your open source Gatekeeper policies and integrate them with StackRox? Let me take a second there; lots of talking. From the network policy perspective, I know that Kirsten also talked about these briefly. It's really about how we can help people leverage network policies. I think the adoption of network policies has been a little low. Four years ago, I was trying to build a network policy, and I thought, man, this is really hard.
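The signature-gating idea above can be sketched as the decision logic of a validating admission controller, shown here as a hedged, simplified stand-in rather than ACS's actual webhook: reject any pod whose images are not backed by a trusted signature. The `TRUSTED` lookup table is a hypothetical placeholder for real signature verification (for example, with cosign).

```python
# Simplified sketch of admission-control gating on image signatures.
# TRUSTED maps image references to verified signing-key fingerprints;
# in practice this would be cryptographic verification, not a dict.

TRUSTED = {
    "registry.example.com/app:1.2.3": "key-fingerprint-abc123",
}

def admit(pod_spec, trusted=TRUSTED):
    """Return (allowed, reason) for an admission review of a pod spec."""
    for container in pod_spec.get("containers", []):
        image = container["image"]
        if image not in trusted:
            # Deny: an unsigned or unknown image must not enter the cluster.
            return False, f"image {image} has no trusted signature"
    return True, "all images signed by trusted keys"

allowed, reason = admit(
    {"containers": [{"image": "registry.example.com/app:1.2.3"}]}
)
print(allowed, reason)
```

The same check runs at deploy time for every workload, which is what makes it a gate rather than an after-the-fact report.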
Like, I can't figure out how to do this. How can we make this easier for people? So the next changes that we're trying to make in this regard are around how you identify applications that don't have network policies applied to them. I think we all know and use Kubernetes labels; they're not the easiest things to use, and they're easy to get wrong. How can you identify things that are not being properly attributed via labels? The next two are really around usability, with respect to UI and UX, getting you out of the YAML stage and into actually using a UI. The first place I like to start when building a network policy is: how do I let the namespace of tenant A talk to the namespace of tenant B? How can we help customers do this and quickly segment them? And then finally, security metrics and trending. How do you know that your security program is working? How can you look at things over time and ensure that the number of policy violations and the number of vulnerabilities are being reduced? Awesome. So, as I mentioned earlier, it's really about integrating with the overall Red Hat portfolio post-acquisition. The first one is really around usability: how can we make sure that we can scan OpenShift local registries? Our entire architecture is based on a control plane, and then you have secured clusters. We have some customers that have over 300 secured clusters. If you have any registries that are local to those clusters, we need to be able to scan them, pull the images out of them, and surface them in our centralized control plane, where you get a single pane of glass to look at all of them. Along the same lines is the integration across Scanner, ACS, and ACM. This is really, in my opinion, a better-together story. We've all built a bunch of amazing features, but how can we provide those features in a unified way, and in a way that is visible to you right through your OpenShift console?
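The tenant A to tenant B case above is the kind of policy a UI could generate for you. As a hedged sketch, here is the NetworkPolicy manifest built as plain data in Python; the namespace names are illustrative, and the `kubernetes.io/metadata.name` label is the one Kubernetes automatically puts on every namespace, which sidesteps the easy-to-get-wrong custom labels mentioned above.

```python
import json

def allow_from_namespace(target_ns, source_ns):
    """Build a NetworkPolicy in target_ns allowing ingress from source_ns."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {
            "name": f"allow-from-{source_ns}",
            "namespace": target_ns,
        },
        "spec": {
            # Empty podSelector selects every pod in the target namespace.
            "podSelector": {},
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{
                    # Match the source namespace by its automatic name label.
                    "namespaceSelector": {
                        "matchLabels": {
                            "kubernetes.io/metadata.name": source_ns,
                        }
                    }
                }]
            }],
        },
    }

# Let workloads in tenant-a reach everything in tenant-b.
print(json.dumps(allow_from_namespace("tenant-b", "tenant-a"), indent=2))
```

Because NetworkPolicy is default-deny only once a policy selects a pod, segmenting tenants also requires a baseline deny policy in each namespace; this sketch shows just the allow half.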
Really, how can we unify all of the amazing technology that we've built and provide it in a super unified and straightforward manner? And then finally, via Submariner, how do we build security for cross-cluster networking? Currently, we are confined to a single cluster. So the overarching goal, and something that I'm very passionate about and have worked really hard on so far, is: how do we provide all of the things I just talked to you about in a way that's easy for you to operationalize? This is where ACS as a managed service comes in: ACS as a service. Our goal is to take the control plane and manage it fully for you, so all you have to run are the stateless secured clusters. The goal is really to get people up and running as fast as possible. Again, we're going to be mostly Kubernetes-distribution agnostic. You can run this in different cloud providers, you can run it on OpenShift, and you can run it in managed OpenShift. It'll be fully managed by Red Hat with an SLA. And then finally, the end goal here is to provide flexible consumption models. And that's all I have for you. Thanks, guys.