Sorry for that rough intro everybody, but let's go ahead. Hello and welcome back to another OpenShift Commons briefing. I am your host today, Karina Angel. I'm one of the OpenShift product managers, and we have a very special guest today. We have Ali Golshan, who's co-founder and CTO of StackRox. I'm sure you've heard by now about the intent to acquire StackRox by Red Hat, and StackRox has been a leader in the container security market and a great partner to Red Hat. So, thank you so much, Ali, for being here with us today. I'm really looking forward to hearing you talk about StackRox. Yeah, thanks for having me. We're all pretty excited about this. All right, let me share my screen and I will jump in. Okay, I feel like I have to ask: can everybody see my screen? Looks great. Thanks. Okay, great. So, I'll do a quick high-level overview of what StackRox does and who we are. I have a few short slides just to set the context, and then I'm going to jump into a product demo and do a deeper dive into use cases and how we think about solving problems for customers, DevOps, developers, and security teams. So, StackRox at its core has been built around cloud-native security. Our premise is that you have to be able to secure the entire lifecycle of an application: from the moment the build happens, to when you're deploying things, to when you're running things. And the core concept for StackRox is Kubernetes-native. What we focus on is how we leverage the infrastructure, the underlying Kubernetes and OpenShift constructs, to apply a lot of the things that in security have typically been bolted on versus built in. So, that's another really core aspect of what we think about when we think about cloud-native and Kubernetes-native: the concept of building things in versus bolting them on. 
So, at a high level, when we talk about the benefits of a Kubernetes-native approach to security, these are really the three large pillars we constantly think about. One is lower operational cost. The lower operational cost really centers around the fact that a lot of the typical policies, languages, or rules written or built by security products typically either get built in some proprietary language or get implemented using the tool itself, as a third-party implementation. For us at StackRox, when we think about Kubernetes-native and how to lower the operational cost for users, it's about aligning developers, DevOps, and security teams around a common language. And that common language is really all these components built into Kubernetes. So, as an example, and I'll show you this later on in the demo itself, when we build micro-segmentation or firewalling rules, we use things like network policies in YAML, and when we deploy them, we actually enforce those through Kubernetes itself. At the same time, we allow that to be merged back into the code base, so you can build it and append it to the application. So, you're inherently hardening your applications and your infrastructure, versus, for example, building an inline proxy or some other tool that is terminating mTLS and trying to enforce things like DPI or segmentation or firewalling. The other part of it is reducing the overall operational risk. If you build things into your infrastructure, you're inherently building some trust into OpenShift and Kubernetes to handle this entire layer for you. So, reducing operational risk means leveraging the already existing components of your infrastructure, like the network segmentation rules that come with network policies, or admission controllers, or your existing CI/CD, to implement a lot of these particular operations. 
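To make the network-policy idea concrete, here is a minimal sketch of the kind of YAML segmentation rule being described. This is a generic Kubernetes NetworkPolicy, not output from StackRox itself, and all the names (namespace, labels, port) are illustrative:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=frontend in the payments
# namespace may reach the backend pods on TCP 8443; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```

Because this is plain, declarative Kubernetes YAML, it can be committed next to the application manifests and enforced by the cluster itself rather than by an inline proxy, which is exactly the built-in-versus-bolted-on distinction being made here.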
So, not only does it become native, but it doesn't break your existing workflow, and because it's already using your infrastructure to implement it, the risk overhead is substantially lower than using a third-party tool that could fail or conflict with your existing first-class citizen, which is Kubernetes or some other tooling. And then lastly, there's this notion of increasing developer productivity. The way we think about this is that there is an abundance of knowledge you can gain by monitoring applications and infrastructure at runtime. Being able to condense that into actionable insights, as data or code, and merge it back into the development process is a very key component of increasing developer productivity. At the same time, not breaking the developer's workflow is a very key concept. This is another area where, if we implement things as part of CI or CD, whether it's vulnerability checks and scans or policies that prevent particular deployments via admission controllers, all this information gets fed back to the developers. Now, we do this in many different forms. We can plug into your typical ticketing or messaging solutions, whether it's PagerDuty or Jira or Slack. But the other part of it is that through StackRox's own CLI, we actually inject ourselves into the CI process. So when a developer is working through their natural flow, we can give that feedback in the CI itself, in the CLI itself, and have the developer see it as part of their native workflow. 
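As an illustration of that CI hook, a build step might invoke the StackRox CLI (roxctl) against the freshly built image. The fragment below is a hypothetical GitHub Actions step; the secret names, image path, and surrounding pipeline are assumptions, so check the roxctl documentation for the exact invocation in your setup:

```yaml
# Hypothetical CI step: check the just-built image against StackRox policies
# and fail the pipeline on violations, so feedback lands in the developer's CI log.
- name: StackRox image check
  env:
    ROX_ENDPOINT: ${{ secrets.ROX_ENDPOINT }}    # Central's address (assumed secret name)
    ROX_API_TOKEN: ${{ secrets.ROX_API_TOKEN }}  # scoped API token (assumed secret name)
  run: |
    roxctl image check \
      --endpoint "$ROX_ENDPOINT" \
      --image "registry.example.com/myapp:${GITHUB_SHA}"
```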
So these are very core concepts to how we develop, and the key concepts are: we want to make sure we're not breaking infrastructure, we're leveraging the tools and capabilities that already come with your infrastructure, and we ensure developers see this feedback in their existing workflow, as early as possible, so they can merge it back into their code base to inherently harden applications and build more secure applications and infrastructure. That's versus the disjointed process where you take actions at runtime, and developers and DevOps teams don't really understand what is actually happening. Now, with that, there's also the higher level of what it is that StackRox actually offers. A good way to think about StackRox is in really three big buckets: securing the build, the deploy, and the run. The way we functionally think about that is StackRox helps you secure the supply chain, then it helps you secure your infrastructure, and then finally it secures the workloads that are running on top of that infrastructure. As part of the build process, in securing your supply chain, there are some really good practices in place, obviously including image scanning. This is an area where StackRox cares deeply about operationalizing vulnerability management and workflows, beyond the scanning itself. And that's why StackRox does provide its own best-in-class scanner, which looks at packages and tools and languages and layers and all these sorts of things. But we also integrate with other existing vulnerability scanners, whether it be Clair, Quay, Anchore, Tenable, or other tools. What you see at the bottom here is a sampling of tools. I'll show you a more complete list of tools we integrate with directly in the product demo itself. 
We also integrate with all the registries that are out there, so very standard, whether it's Red Hat's or Amazon's ECR or Artifactory or IBM Cloud Container Registry. And then at the deployment stage, we focus on securing your infrastructure. Securing infrastructure is about things like applying the right CIS benchmarks for Docker and Kube, implementing least privilege and best practices for security, and ensuring things like RBAC controls are properly configured. Things like segmentation rules are properly applied to your pods and deployments and namespaces, and the infrastructure that is now ready to deploy workloads on top of it is properly configured, so you have all the best preventative and hardening measures already in place. And the reason for this is that it creates a funneling effect for us: if you build securely and ensure your supply chain is hardened, and you ensure your infrastructure itself has preventative measures in place and is hardened, then the scope of the attack surface at runtime naturally reduces. So what we can focus on from a runtime standpoint is not only highly efficient, but very scalable and highly automated. And this feedback loop that we talked about, gathering insight at runtime and feeding it back into build time, over time compounds this value, continuously reduces the attack surface, and allows you to really focus on the things that matter to you most. At runtime, the components we talk about are really around detection, response, forensics, and general investigations. And from a runtime standpoint, we integrate with the existing and standard tools, from SIEMs, whether it be Splunk or Sumo Logic or AWS Security Hub, to all the other notification tools we discussed. Overall, StackRox, as we mentioned, is Kubernetes-native, but this means any Kubernetes distro. 
So it doesn't really matter what the wrapper around Kube is, we can deploy on it, whether it's managed services like EKS, AKS, GKE, or PKS, or whether you're rolling out your own vanilla-flavor Kubernetes or using OpenShift. And on top of that, it doesn't matter where you run it: public cloud, private cloud, hybrid cloud, which seems to be the most emerging pattern we see all the time. We run anywhere and we can protect anything under the Kube umbrella. Now, architecturally, I'll just show you how StackRox deploys and then we'll jump into the actual product demo. StackRox has really three main components: StackRox Central, StackRox Sensor, and StackRox Collector. StackRox Central is one per customer. Central is really the brains: it's the UI, it's the API server, it's where the scanning happens and the data analytics and analysis happen. This is also where all the third-party tools and other technologies and solutions integrate with StackRox, so this is where we can output alerts or do webhook integrations. Then we have the StackRox Sensor. Sensor and Collector deploy as DaemonSets and are Kubernetes-native. They run with read access only; they don't have permissions to write to your clusters, and as a result they don't need high levels of permission, which in itself naturally reduces the attack surface that exists in your infrastructure. Sensors also become mutating webhooks and act as admission controllers, so they can have things like policies enforced cluster-wide, and enforce other policies written either by the customer themselves or from the 60-plus out-of-the-box policies that we produce. And then the Collector itself is one per node. The Collector can actually run in two different modes. 
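The per-node Collector placement described here is the standard Kubernetes DaemonSet pattern: the scheduler places exactly one pod on every node. A heavily simplified, illustrative sketch (the image, namespace, and names are placeholders, not the actual StackRox manifests):

```yaml
# Illustrative DaemonSet: Kubernetes schedules one collector pod per node,
# which is how per-node data collection is achieved.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collector
  namespace: stackrox
spec:
  selector:
    matchLabels:
      app: collector
  template:
    metadata:
      labels:
        app: collector
    spec:
      containers:
        - name: collector
          image: registry.example.com/collector:latest  # placeholder image
```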
Our preferred way is leveraging the extended Berkeley Packet Filter, eBPF, to collect system information, processes, and network information. This is the piece that contributes to detection, response, and forensics. But there are also customers of ours on more dated versions of the Linux kernel that don't have eBPF, and we allow this to run as a kernel module if needed. The way this works is: Collectors collect information from every node and send it back to Sensor; Sensor correlates it across the entire cluster and sends it to Central; and then Central collects and correlates information across all the clusters and pushes back down things like rules, permission requirements, and controls. And all of this information is pushed back into Kubernetes for enforcement or action. So as an example, if there is potentially malicious or anomalous activity, or an attack on a particular pod or deployment, we try not to intercept the system call table and kill system calls or processes. What we do is instruct Kubernetes to tear down that particular pod and just spin up a new one. However, because the Collector runs on that particular node, all of the forensics and investigation data you need, from processes and system calls and network information, is collected and stashed in Central, and you can output that into your SIEM or other tools. So you can still go back and do investigations and forensics, see what happened, and build policies on top of that, yet we're not actually conflicting with your first-class citizen, which is your business logic and your uptime requirements. So that's a quick overview of the StackRox solution and what we do. From there, I'll just jump into the product demo and give an overview of what the StackRox tool itself does. Okay, I'm going to jump in. I'm assuming everybody can see my dashboard; if not, please let me know. So a couple of quick things to mention. 
When you see the StackRox dashboard, there are a few things to take into consideration. First of all, everything you see here is presented as Kubernetes constructs. We don't talk in the notion of images or containers; we talk about pods and deployments and actual applications that are bundled together in a particular namespace. Next, everything you see here is exportable through APIs. There's no black box, no magic under the hood: every bit of data you see in our UI, you can export through the API. And the last part is that everything can be filtered. So you can say, well, I don't really want to see everything under the sun; I'm, for example, only interested in my clusters that run in production. The same sort of filtering can be applied through our API, so if you want to do very fine-grained searches or exports of data, you can pass the same sort of flags and sets through the API. So when you land on the dashboard, the first thing you see is an overview of everywhere StackRox is deployed. You can see the overall violations, which we'll jump into. You can see your compliance overviews; again, we have a dedicated compliance workflow which we'll get into. And then a number of different types of violations and risks. These categories that you see, from DevOps best practices to networking tools to privileges, are all categories of pre-built policies that come out of the box from StackRox, configured and ready to go. We'll get a little deeper into what each of these particular things means and how they actually get implemented. From a general workflow standpoint, I'll walk through how StackRox typically works with customers and how customers operationalize us, from vulnerability management starting at build time, through deployment, all the way through runtime. So customers typically start from a standpoint of integration. 
So first of all, you can see an overview of your clusters, and it's very easy to add new clusters into this environment. Because we deploy as a Kubernetes application, it's very simple: if you know how to deploy on Kube, it's that simple to deploy StackRox. Typically, and there's a little bit of nuance depending on how much scale you want to deploy us at, but at a standard scale when customers want to do a proof of value, it takes StackRox anywhere between 15 to 20 minutes to deploy. And it's important to mention, StackRox is an on-premises solution. This is not a SaaS service; this entire product deploys on your own premises, regardless of where your premises are: public, private, or hybrid. Typically, customers put our Central in a dedicated namespace or a cluster by itself, and then everything else, which is Sensors and Collectors, deploys as Kubernetes DaemonSets. As part of this, the first place customers typically start is around integration. Whether they want to start with some image integrations, these are the out-of-the-box integrations we have, and it's very simple to get started quickly: you give the integration a name, the type, the endpoint you point it at, and we can test and validate that the integration works very quickly. These are the plugins for notifications: Slack, Jira, output into email, PagerDuty, Splunk, Teams. We also have our own generic webhook, as well as AWS Security Hub and syslog. You can also do backups to S3 or Google Cloud Storage, or use our own APIs and create tokens and scoped access controls. So these are the typical ways customers get started. Once they get started and integrate us into their workflow, the vulnerability management side is where they typically begin. Now, there's a lot of depth here, and I'll showcase some of the unique components of StackRox. 
First, StackRox doesn't just show you vulnerabilities that exist in your images. We also look for vulnerabilities inside Kubernetes and Istio themselves, and the premise is obvious: if your infrastructure, your source of truth, is insecure and somebody can own it, it doesn't really matter what your containers and applications are doing or how hardened they are; if you can take over everything as root and admin, there's a lot more you can do. As you can see, we break down the vulnerability management workflow in many different ways. First, we can show you the most recently detected vulnerabilities. Then we can show you the most common vulnerabilities in your infrastructure, and you can see we actually pull the CVSS scores and show you the severity of those. But then where it gets very unique for us is that we actually query the Kubernetes master and figure out which workloads are actually running, whether in production or in different namespaces, and we will tell you, based on the breakdown of your applications, which ones actually have the most exposure and risk packed into them. One way you can slice this is to say, for example, I care about risk based on a particular cluster; here you can see production is obviously on top, and you can jump into just your production cluster. And from there, you can see different things. You can see policies that have been built against vulnerability management, and at the same time you can see fixable vulnerabilities in what is currently deployed and loaded, for you to go fix. Now, when customers start here, after this they typically move into configuration management. Configuration management, you can think of as posture management for Kubernetes and your infrastructure. So these are the categories we typically focus on. 
Configuration management based on policies that StackRox itself produces, and I'll show you some of those and what they mean. Covering CIS benchmarks for Docker and Kubernetes. Viewing all things related to admin roles, as well as secrets. And the way you can slice and dice configuration management is you can look at it based on clusters, namespaces, nodes, deployments, images, or secrets. We also have a full workflow for users and groups, service accounts, and roles when it comes to RBAC visibility and configuration. So there's obviously a lot of depth in how we do posture management, how we make recommendations, and how we ensure all these hundreds of nodes that exist in your infrastructure are properly locked down. But just to simplify this, I'll show you one simple workflow so you can see how quickly you can go from visibility to actual remediation. So let's take a look at a misconfiguration that StackRox has surfaced as part of its own policy. Let's click on these high misconfiguration alerts. We can see that there's an SSH port exposed and it's misconfigured. Now if I click on SSH, I can jump in here and see where SSH is misconfigured. I can see the deployments that are misconfigured are my jump host, which may not be as critical, and my visa-processor, which is highly critical. So from here I can jump into my visa-processor and see what other misconfigurations exist on this particular deployment. I can see I have Apache Struts, I have privileges that have not been revoked, and here's the SSH issue that we talked about. So if I want to fix this now, I can just say edit this policy, and here StackRox allows you to walk through the policy workflow; then all you have to do is simply turn on our policy enforcement. Again, because of our native integration into Kubernetes, enforcement and security are extremely simplified. 
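Under the hood, "turning on enforcement" at admission time corresponds to a standard Kubernetes webhook registration, where the API server asks the sensor to approve or reject each new deployment. A minimal, illustrative sketch of that mechanism (the names, service, and path are assumptions, not StackRox's actual configuration):

```yaml
# Illustrative webhook registration: the API server consults the sensor's
# /admission endpoint before admitting new or updated Deployments.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-enforcement        # illustrative name
webhooks:
  - name: policy.example.com      # illustrative
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore         # fail open so enforcement can't block the cluster itself
    clientConfig:
      service:
        name: sensor
        namespace: stackrox
        path: /admission
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
```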
We actually look at collisions, we determine where the best place is for you to enforce this policy, and all you have to do is turn it on. This is where StackRox, using webhooks, becomes an admission controller and essentially blocks any future releases or deployments with this particular misconfiguration. And as soon as you save that, it is now enforced. So that's a quick walkthrough of how fast you can go from misconfiguration to resolution. Typically, when customers have solved for vulnerability management and configuration management, where they usually land next is compliance. Compliance for us is actually a differentiated function, and I'll explain why. Unlike a lot of tools in the industry that treat compliance as a subset of CIS Docker and Kube benchmarks, we actually built out, best in class, each control under every one of these compliance frameworks. We took all the HIPAA controls that apply to Kube and Docker and built them in as controls, and the same with NIST 800-190 and 800-53, as well as PCI. The way you can orient your compliance workflow, rather than getting all this overwhelming information, is to say, well, I either care about particular clusters, like production, or I might care about a particular namespace; for example, I care about my payments namespace, and I can jump into this payments namespace and see exactly what compliance issues I have in this environment. So I can see overall I'm at 34% compliance, and I have PCI, HIPAA, and NIST 800-190 and 800-53 running. If I wanted to dive deeper into PCI, I can see the entire standard here: the controls that are passing and the controls that are failing. Looking at the controls that are failing, I can see, for example, requirements for firewalls that are not in this particular zone that need to be implemented, and I can actually see which two particular clusters are impacted by this. 
This is very important, because once we get into our implementation of segmentation and firewalling rules, you can actually implement policies or recommendations from StackRox to increase your overall compliance with particular policies and compliance requirements. So this is very critical. The other part that is very important for StackRox is that, because we have that Collector running on every node, we actually capture the evidence for every one of these controls. So if you export this not as PDF but as CSV, we actually give you proof of audit for every single one of those controls. This is a highly automated process, and I'll show you a cardinal rule of what not to do during a demo, which is I'll do a live run of our scan. You can see we're running 67 deployments here. It's not massive, but the point I'm trying to make is, if you run a scan here, you can see I just hit run, and you can see how long this takes. The reason this is important, and that's it, it's done. The reason it's important is that even if you scale this to orders of magnitude larger, you're still talking about tens of minutes. And the reason that becomes very important is our customers can run compliance checks on a nightly basis, or at least a weekly basis, so they can find drift in their compliance and their violations, versus doing this once a quarter or once every six months and having this large volume of fixes they have to go do. This is an area where we've invested a lot of time: we've taken a lot of very remedial, manual work, made it highly automated, and attached proof to it, so you can just hand it to your auditors, or produce the PDF and hand it to executives, and move on from compliance quickly. Okay, now typically these are all the things you need to make sure you're hardening your supply chain and your infrastructure is properly configured and ready for deployment. So what happens when you actually deploy workloads on top of this environment? 
What ends up happening is StackRox gives you visibility into everything that you have deployed. At a very high level, you can see we're looking at our production cluster. You can also look at other clusters if you choose to, but let's focus on production here. Everything in this shaded box that you see is our production environment, and we also show you egress and ingress to external nodes and what they're talking to. Every one of these boxes that you see here is a namespace, and if I zoom in, inside these namespaces is where you see the actual deployments. Now if I hover over these deployments, I can actually see ingress flows, egress flows, what ports and services are listening, and the type of TCP processes and services that are actually active. These are all very relevant; they all get logged, and you can export all of this to validate things. Now, the reason this is very important is that as we look at how you build your images, how you deploy, and the requirements and dependencies of your applications, we build what is called your active connections map. The active connections map is every path and service your applications actually need to conduct their normal operations. On top of this, we also show you what is allowed. For allowed, you can see, if I switch back and forth, the things that are blue versus the things that are red. Allowed shows you the entire permissive attack surface. These are all misconfigurations: services that are missing their network policies or firewalls, and as a result are reachable without you really knowing or wanting them to be reachable. So let's, for example, look at our backend, and we can see that our backend API server, if I click on it, has no network policies here and is basically completely exposed. At the same time, we actually build a baseline of what the healthy and secured communication paths between your services are. 
The reason this becomes very important is that once we lock that baseline in, you have the ability to add other services or other potential communication paths that may not be part of the baseline to your baseline, if you choose to. So it's not an automated process where everything is either locked or not; you have the ability to determine whether you want to add additional flows to your baselines. And obviously this is very important for fine-tuning and for adding other services that may not be part of your manifest, or that might be edge use cases you have. Now the typical question becomes: okay, if I have all this permissive attack surface and StackRox is aware of it, what do I do about it? This is where you can actually tell StackRox: take the past week as the baseline that you understand, and generate and simulate network policies for me. This is where StackRox generates an entire YAML policy for you, and this one we chose is for the entire cluster. You can actually scope this down if you want to; you can say I only care about, for example, my frontend and, let's say, my backend. If you went through the same workflow, you would only generate policies for those, but for the sake of this particular demo, we'll just do it for the whole cluster. And once you generate it, the reason we simulate it is that we treat your infrastructure as code. This is all declarative policy, so we can check all of our policies against the communication paths and the policies that exist inside Kube, and ensure this segmentation and firewalling policy does not break any of your existing communications. 
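A generated policy set of the kind being described often boils down to a default-deny rule plus allow rules derived from the observed baseline. The following is an illustrative sketch of that shape, not actual StackRox output; all names are placeholders:

```yaml
# Illustrative generated pair: deny all ingress in the namespace by default,
# then re-allow only the flows observed in the baseline.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api-server
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend   # observed caller namespace
```

Because both policies are declarative YAML, simulating them against the observed flows before applying, and merging them back through change control, works the same way as for any other piece of the manifest.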
We also simulate it for two reasons. One, our preferred way is to allow you to share this back through your change control, integrated into Slack or any other Git process you have, so your developers can merge this policy on the next build. Two, you might be under a particular attack, or you might have a breach, where your team might legitimately need to apply this policy in real time to your infrastructure to prevent something that is happening. And based on that, this is how StackRox helps you lock down your environment; now if I show you allowed, you can see that all these other application deployments have turned blue. To state the obvious, we don't actually go tamper with anything in Kubernetes itself, but everything that is your applications on top ends up getting locked down. Now, this is our entire workflow process to build in best practices and put hardening and preventative measures in place. At the end of this, naturally, some things potentially get through: you have attacks, you potentially have exploits or vulnerabilities, and this is where the detection and response part comes in for StackRox. This is the standard workflow for how we detect and alert you. Everything in this particular workflow can be exported, but let me just show you something. For example, say we want to find something that is running with process UID 0. First of all, we actually capture the entry point, the container ID, and the argument that was passed and written, but if there's even more information here, we capture the entire forensics data for you, so you can go back, correlate this, and see if this particular information exists anywhere else in your environment. 
Again, StackRox does not act as a SIEM or a data lake; we have native integrations so you can merge and export all this information into your environment. And the reason StackRox is very proficient at doing this is that, under the hood, StackRox is really an asset inventory management and tracking tool. From the moment code becomes an image, we track it based on its dependencies, its requirements, where it's deployed, where it's actually loaded and running, and all the other dependencies and communications it has. So when we alert on something, we give you the entire asset information around that particular deployment, all the way from the commands it's running to the arguments, even whether it had volumes or secrets or additional security context that you needed. And then this is the policy that gets generated and inputted into your Slack or into any other tool you have. As an example, if this was integrated into your build time, let me just show you a lifecycle alert, for example one that is for build. What ends up happening here is that for anything at build and deploy, this information, the rationale, description, and remediation, can be directly inputted into the CLI so the developer can immediately see it if it's violating something. So this is, again, how we ensure high velocity for developers when they're interacting with this particular tool. Now, finally, when you have all these components in place, the culmination of all these outputs becomes what we call our risk metrics and risk ordering. This is where we stack-rank all the applications you've deployed from most risky to least risky. There's nothing that has zero risk; it's just what risk you're willing to take on. So let's take a look at our visa-processor, which is most risky. Why is that? Well, we can see these are all the policy violations that are causing this particular deployment to be risky: we see there are suspicious processes being launched, image vulnerabilities, misconfigurations, how this service is actually reachable, components 
that are unnecessary by this application that expand the attack surface all the way to our back misconfigurations now as part of our detection we also have process baselining other than network baselining so we constantly talk about that funnel of how do we reduce the attack surface and limit the exposure so when we actually run these services and these containers we build an entire baseline and one of the unique things StackBox does that is differentiated and this is where we leverage Kubernetes native constructs is rather than just baselining on a container we use what we call pod consents and uniformity which is all the pods should be uniform and run the same way so if there are edge cases or anomalies we use that conformity to be able to drive down and drive out noise and really be able to signal on a particular process or function that is truly malicious so the process baselining allows you to add all these baseline components in yourself if you wanted to StackBox does this automatically and anything that is captured outside the baseline we capture alert and correlate so you can see then bash ran and then these are all the other processes and instructions that ran so somebody then ran app get and downloaded a package all this information is actually collected you also have as I mentioned the ability to be able to add these to your allowed list or remove it from your disallowed list so if you said well I actually want to allow somebody to run bash that's fine you can add this to the baseline and at the same time you can remove some something like CEDO from your baseline and say I don't actually want anyone to run it and since I just brought this out even if I expand this I can actually see if there was a command ran so this is not sort of blind historic events either and then finally if you were to actually look at the entire processes you can actually see the entire execution table and this is sort of the time series view of what executed where so this is very 
useful when you're doing forensics and incident response and if I were to sort of click through this you can see every different asset here has different requirements and different sort of output as to why something was made to be very suspicious from this workflow you can jump actually back into the network side and implement network policies if you chose to at this point there's no recommendations because obviously we implemented segmentation firewalling across the whole cluster last couple of quick things I'll show you is so these are all the system policies that come built out of the box in stack rocks currently there's 63 but obviously we allow our customers to be able to create new ones or duplicate the ones that are here and modify them so you can actually modify things in here directly by adding or changing the severity which changes the risk scoring you can determine what life stage you want to actually apply this the rationale categorization you can even restrict the scope or exclude a particular scope from this particular policy when you hit next the other things you can do is is you can actually add other policy criteria so you can say I want to actually add for example understanding of CVSS score to this particular policy or because this is shell shock I want to understand for example if there's process activity correlated to this by a process name this is very simple you can combine it as an and or as an or and add it to a different component so it's a very simple drag and drop policy workflow once you've modified your policy when you hit next stack rocks will audit your workloads and will tell you if you apply this policy which deployment is actually impacted so you're not sort of going in blind not knowing what will we really impacted if I enforce this policy and then finally stack rocks will tell you where it's best to be able to apply this so you know your view might be well I'm actually okay letting developers build with something that has shell 
shock I just don't want them ever to deploy this in production I want to prevent that so you can sort of mix and match how you do that finally all of our API references are built into the UI so this is for you to be able to automate all the workflows and all the use cases we talked about directly from the product itself samples and blueprints incentives of codes are built in our entire help center which is get started guide is built into the dashboard as well because it takes 15 to 20 minutes as we discuss most are our customers have sort of gone on this track of being able to get things up and running themselves and then the last thing I'll show you is the search capability because we have all this inherent knowledge under the hood we allow you actually query on top of that so you can say well you know nothing you showed me was what I was looking for but I'm interested in understanding if somewhere in my production cluster I have a CVE score for example from 2020 and this way we actually parse all the data we have seen across your infrastructure we tell you what violations associated to this search what secrets are associated to this precision research term and we're actually expanding this to make it boolean so you can even add processes or functions or even more specific indicators for a lack of a better word we're allowing you to basically write a search query on the enumeration of your infrastructure that we have presented and with that that concludes the overall presentation for Stack Rock so thank you very much I'll pause there and happy to take any questions thank you so much what is amazing is that we have a lot of questions there's a lot of great stuff in the chat that demo was fantastic thank you you've obviously worked really hard to make a very complete product yeah the engineering and product and sales and marketing teams make me look good they build the really interesting stuff and I get to talk about it and I sound smart well I do want to also 
introduce Kirsten Newcomer, because she has joined us now; I know Ali knows her. There you are. Yep, hi folks. Hello. If you don't know Kirsten, she is the... what, do you want to introduce yourself? Sure: security, DevSecOps for OpenShift, so security throughout the stack; I collaborate with other members of the product management team. And, Karina may have already been getting ready to ask you these, Ali: we've been trying to answer questions as we went, but there were a couple here that we thought were for you. One of which was: how would you compare Prisma Cloud / Twistlock with StackRox? What makes StackRox stand out? And in particular, this is Tim, he's asking about Notary service and CI/CD deployments.

Sure. There are a couple of notable differentiators compared to Twistlock and Prisma. One is what we mentioned up front, this notion of Kubernetes-native. When we take enforcement actions around networking or, for example, on specific containers, we leverage Kubernetes-native constructs, so we don't collide with the infrastructure. As an example, we're not becoming an inline proxy, and we're not shimming the runtime engine and taking actions there. That's the component that allows us to have substantially lower operational risk and better scaling, and as a result substantially less CPU and memory overhead. The other part is that we also tend to be much more open. Our integrations into CI/CD tooling are a lot more developer focused, where Prisma and Twistlock tend to be a bit more security-operations oriented; we are more developer friendly and have a richer set of integrations with tools and services as part of the CI/CD process. And then there's one piece we didn't talk about, another core differentiator, which is our plugin called KubeLinter, an open source tool. You can search for KubeLinter from StackRox. It's a linting tool you can download as a binary or point at your Git repos, for example, and it gives you dozens of best practices out of the box, so you can lint your Helm charts or YAML for best practices. The direction is to eventually plug this into the StackRox platform and get application-specific linting. So the really big buckets we differentiate on are: native to Kubernetes, with less operational risk and overhead; more focused on the developer and the CI/CD workflow; and more open source components that can be leveraged independently.

Awesome, thank you. And I'm going to say this out loud; I know it's been put in chat. Folks are clearly aware of the announcement that Red Hat and StackRox will be working more closely together in the future. The deal still has not closed, so we can't answer any questions about anything post-acquisition or post-close. Today we're independent companies and we'll answer as independent companies, just so everybody knows. Another question that I thought was for you, Ali, is from Doric; I hope I'm pronouncing that properly. Does StackRox have ML and/or AI capabilities? And related to that: is there interest in alerts for anomaly detection without creating rules?

Sure. At this point we're not doing anything I would consider to be AI, because I consider AI to be more predictive. We are doing some simple correlations, and, just to be very transparent, I wouldn't go as far as calling them true machine learning. The reason goes back to what I mentioned early on about dealing with infrastructure as code and declarative policies: it doesn't really create a lot of dimensionality or complexity, and you don't need a lot of cardinality in your data, which allows us to be more decisive and definitive about what we produce. The policies we create right now are rule-based, and the suggestions we create, because they're based on network policies or segmentation rules, are, for lack of a better word, heuristics. As a result we haven't actually seen the need to build specific AI/ML. Now, that having been said, it is part of our roadmap to think more about layering additional analytics on top, so we can create more insight in detection and response, forensics, and better recommendations overall. This maps to some of our thinking about expanding our footprint. One of the things we didn't get into is that we have integrations into service mesh and Istio, and in some of those areas, correlating data points that are different or have high dimensionality and cardinality would be useful. We just haven't seen real-world use cases in the declarative, infrastructure-as-code world, which is Kubernetes and containers, that truly need AI or ML to solve something. So we don't have that level of complexity in the product, and it's a deliberate decision, because for those tools to work really well you naturally need access to an abundance of data that, in an immutable, ephemeral world, is unnecessary. We've made the conscious decision to tailor toward automation and scalability with low overhead rather than an overabundance of data collection. This is also why we export our data from our API into your existing SIEMs or data lakes, rather than trying to become the sort of tool that hoards your data.

Awesome. And it looks like... let's see, Chris, do you want to relay some of the questions you've put in chat for us, or do you want us to do that? I usually do it. Okay, go for it, Karina. So, there are so many great questions; let's go back to the top. StackRox is cross-platform, right? It goes from AWS to OpenShift?
That's correct. We have customers that run us on their own vanilla Kubernetes and on OpenShift, customers that run us on OpenShift on their own premises, and customers that run us in EKS, GKE, or AKS. It doesn't matter; you can run us across anything, anywhere.

Awesome, thank you. And I know this one has been answered, but it would be good to answer it live. Regarding CNI: there is Multus to provide network isolation, but what about security with Multus? Sometimes we receive comments that, because it's a third-party solution providing network isolation, it will weaken security in OpenShift. There you go. I think that's an OpenShift question, so I'm going to tackle that. Thanks, Karina. Right: one of the purposes of the Multus plugin that we ship by default with OpenShift is to allow you to do SDN chaining, and also to have other options that support different ways of working with networking in an OpenShift cluster. The OpenShift SDN actually has strong security, and we have tested that, in combination with SCCs and so on, against certain vanilla Kubernetes environments; we do have strong security with the out-of-the-box SDN. What the question might be referring to is the ability to use other networking features, such as SR-IOV, DPDK, or Macvlan, that bypass the SDN. In those cases they do create, you might say, a different attack vector, and they need to be used with caution: you want to evaluate the appropriateness of using that feature versus the risk. As Ali said earlier, there is no such thing as zero risk, so when we think about this from a security perspective, you decide whether you need to use those features, whether you're willing to accept the risk, and what mitigating controls you might provide. Those features are typically used in low-latency environments where performance is critical.

Thank you, Kirsten. We had some more OpenShift ones. Oh, the one about the DaemonSet: is this a DaemonSet of agents per node? Yeah, that's actually for Ali, but keep going. Yeah, so that is a DaemonSet per host, the collector piece, and as we mentioned, it runs as a DaemonSet with read-only privileges. Unlike traditional agents that run with write access and can tamper with the host, we are read-only; we correlate that information per cluster and then instruct Kubernetes to take action. We leverage Kubernetes as the control plane, and, for example, the sidecars that come with a mesh, or the kernel itself, as the data plane to enforce things, but we don't tamper with anything on the host itself.

I have a generic question that's not in the chat: is there a lot of overhead? You mentioned you're not trying to take over storage; can you talk about performance? So, we typically do our performance testing on medium-size cloud boxes, quad-core with 16 GB of RAM, and our overall overhead is considerably lower than most solutions we've seen out there: we're talking somewhere between 1.5 and 2 percent CPU and memory utilization if you run everything as-is on a relatively hygienic box, meaning you're not overloading it with containers and crushing the I/O on the runtime engine itself. To this date we have yet to have performance or overhead come up as an issue in customer feedback. Transparently, this is something we worked very heavily on two or three years ago, but over the last year and a half to two years it has actually been one of our core differentiators against other products in the market.

Okay then, thank you. And speaking of core differentiators, of course people are asking what your core differentiators are against your competitors, Prisma or Sysdig; could you call some of those out? Sure. I think generally the core differentiator for us, as a common denominator, is that we tend to be a declarative policy enforcement tool: we leverage that Kubernetes-native understanding of the constructs, and we understand the relationships between objects from when they're being deployed all the way to runtime. When you build a policy, we're the only tool where you can apply it from build and have a correlated policy through deployment all the way to runtime. That's the core differentiator we have against everyone. If you break it down by specific tool, it becomes a little different. The Prisma Twistlock one we talked about a little. In the case of Sysdig: Sysdig is a great tool, and I'm a huge fan of Falco; they have really great open source projects and they've done really well. But at the end of the day it's mostly a monitoring tool that pivoted into a security tool, leveraging that underlying collection model to do security. As a result they're strong on the runtime side, but they have substantial gaps on the build and deploy side: that full lifecycle policy coverage, enforcement, and preventative coverage. They're also missing pieces like configuration and posture management that we do. It all depends on what the core use cases or problem sets are; from a completeness standpoint, that's where we shine, and everything for us is built for Kubernetes, which is another core differentiator.

Thank you. And again, there are a lot of comments in chat just saying thank you for taking the time to go over all of this and answer questions. There's one that asks what steps the community goes through to deliver operators... I'm trying to pull out the question. Let's see, I'll read what caught my attention: "The recommendations it provides for operators are a compelling concept. What steps does the community go through to deliver operators with the right information here and not mislead? False positives can stir the value of a project, for example..." Chris, do you want to maybe get a clarification on that one? And there's also: can you deploy and run in a fully air-gapped environment? I don't know, Ali, do you want to...? So, I'm assuming the air-gap question is for me, and then I'll turn it back to the team for the operators part. The short answer is yes. We actually have large government customers that we work with, from intelligence to the US Air Force to the DOD, that have been public references, and those customers do run us in a fully air-gapped environment. So yes, you absolutely have the ability to take updates, policies, and rules out of band, update the product, and run the product itself in a fully air-gapped environment.

Thank you. So, regarding the false-positives question: absolutely, that's a complex scenario, and anyone who delivers vulnerability scanning solutions, or consumes the results of vulnerability scanning solutions, is aware that there can be challenges there. I'll try to keep this a little short because I know we've only got a few more minutes left.
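To make the false-positive problem concrete: a scanner that applies one generic severity per CVE can over-report against content a vendor has already patched, mitigated, or never shipped. A toy sketch of reconciling scanner findings against a vendor-specific feed; every CVE ID, severity, and data structure below is invented for illustration and is not Red Hat's actual feed format:

```python
# Generic one-size-fits-all ratings, as a plain CVE -> severity map.
GENERIC_FEED = {
    "CVE-2020-0001": "critical",
    "CVE-2020-0002": "high",
}

# The vendor's product-specific assessment. A severity of None means the
# vulnerable code is not shipped, so any finding is a false positive.
VENDOR_FEED = {
    "CVE-2020-0001": {"severity": "moderate", "reason": "mitigated by build flags"},
    "CVE-2020-0002": {"severity": None, "reason": "component not shipped"},
}

def reconcile(cve: str) -> dict:
    """Prefer the vendor's product-specific rating when one exists."""
    vendor = VENDOR_FEED.get(cve)
    if vendor is None:
        # No vendor statement: fall back to the generic rating.
        return {"cve": cve, "severity": GENERIC_FEED.get(cve), "source": "generic"}
    if vendor["severity"] is None:
        # Vendor says not applicable: suppress as a false positive.
        return {"cve": cve, "severity": None, "suppressed": True,
                "source": "vendor", "reason": vendor["reason"]}
    return {"cve": cve, "severity": vendor["severity"],
            "source": "vendor", "reason": vendor["reason"]}

for cve in sorted(GENERIC_FEED):
    print(reconcile(cve))
```

The point of the sketch is only the precedence rule: a product-specific feed can both re-rate a CVE downward and suppress it entirely, which is why scanners that consume the vendor feed report fewer false positives.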
Red Hat does produce its own set of vulnerability data, its own vulnerability feed, which can be consumed by vulnerability scanners. We do that in part because our product security team evaluates newly discovered vulnerabilities in the context of what we ship in Red Hat and what we enable and don't enable. So sometimes our severity rating for a CVE is different in Red Hat-shipped content than in an upstream project, and sometimes there's a mitigation in place in a Red Hat solution that obviously isn't available if you're just using vanilla upstream. We make that feed generally available, and we've done a lot of work over the last, gosh, six to twelve months to improve that data feed to help reduce false positives. We also launched a project, the container scanning vulnerability project, to work with all of our partners to help them more easily consume that new feed and to help them provide better data for Red Hat solutions. That is a work in progress, and we will be continuing to work with all our scanning partners on it.

Thank you. Can you also address the compliance operator and how StackRox fits in with that? Sure; and Ali does a great demo, I love seeing all of the compliance checks that StackRox offers. One of the things that's also been in progress for a while (I'm really keeping my fingers crossed we can get this out in February) is this: the CIS Kubernetes benchmark is designed for vanilla Kubernetes, and CIS recently started adding distribution-specific benchmarks, so we're very close to publishing a CIS OpenShift benchmark in the near term. Because OpenShift is operator driven, we manage configuration settings differently than vanilla Kubernetes does, and so does the OpenShift compliance operator. We also recently shipped compliance checks for the CIS benchmark, which we call "inspired by the CIS Kubernetes benchmark" until we get the CIS OpenShift benchmark published. So if you're interested in scanning for CIS compliance with OpenShift, the compliance operator is your better bet right now; it's available with any OpenShift subscription. The compliance operator will also scan at the RHEL CoreOS layer, and we do a number of checks that are subsets of the NIST 800-53 controls, so you might find using the two together to be valuable.

Thank you, thank you very much. And I know you'll have more after the acquisition closes that we can't address right now, but in a future briefing, I'm sure. Let's end on this last question: can StackRox be installed in one place, either standalone or as part of a cluster, and scan and secure dissimilar systems? I'd love to have one instance able to scan ECS, EKS, OCP, vSphere 7 with Tanzu, and also a standalone Quay image repository. So this would be you, Ali. Yeah, I just want to make sure I understand the question correctly. When we're talking about standalone, there is a way to do that: you would have to have standalone deployments of Central, because the scanner is part of Central itself. Now, we're working on how we decouple that. The short answer is yes, but there are some nuances about how you go about it, and it's not natively correlated across multiple instances, so it really comes down to how you want to implement it. But the short answer is yes.

Awesome, thank you very, very much. Would you like to leave us with some parting thoughts, with one minute to go, either of you? We're excited to join Red Hat, and we're excited to have you. All right, and thank you again for coming, giving the overview and the fantastic demo, and answering so many questions. I know there are more that haven't been answered, so I will copy all of those down and we'll see what we can do. Thank you, everybody, for joining us. Thanks for having me; it was great for everybody to join. Thank you very much for your time. Thank you. And Chris, would you like to see us out?