 We're rolling. We are live streaming on the internet today. I'm Michael Waight and this is the OpenShift Commons Briefings. This is our Operator Hours show, and today we have with us Aqua Security, and not just anyone from Aqua Security: we've got Svi Keren, their field CTO, joining us, and our very own Dave Muir, who's an SA focused on our top tier software partners. How's it going, fellas? Very good. Thanks for having us. Yeah, fantastic. Welcome to the show. We've got one fun-filled hour here today. We're streaming live on Twitch, we are on YouTube, and we are on Facebook, and people will be able to ask questions on those channels as usual; any questions that come in, we'll address them down here in the chat. Svi, you are the field CTO at Aqua. What does that entail? So it's a pretty new role in the industry in general, but the way that I like to think about it is: if I were the CTO of any of our customers, what would I do to make containerization and cloud native security the best that it can be for that customer? So it's a pretty encompassing role. You know, it covers product, process, customers, and it's pretty exciting. I've been doing it for about a couple of years now. I was going to say a couple of years, but you've been at Aqua for quite some time, right? Weren't you one of the original employees in the company? Yes, I joined shortly after the company was established. I was actually the first employee in North America; the company is based out of Israel. So I started doing basically everything, just working with our CEO and CTO, a three-man crew going into potential customers in New York City, knocking on doors, doing everything you need to do in a startup environment, and then kind of growing with the industry. Yeah, that's pretty interesting.
So you've been there from basically close to inception until now, where Aqua is a household word and a pretty mainstream security vendor in computing. What's changed during the time that you've been there? And I'll give you an example of why I'm asking. When I started at Red Hat, it was in 2002, and I was a solutions architect dealing with customers, and when we would go in and I'd be supporting my account executives, we spent most of our time, as opposed to trying to sell Red Hat product, trying to sell open source: convincing customers that open source was good, that it wasn't just people with blue hair and skateboards. It was really challenging to have to sell open source, to convince people of why a paid offering is good. You fast forward to today, and we don't have that challenge anymore of trying to convince people that open source is good. We're just trying to convince them that the Red Hat product and technology roadmap is good and is going to solve and meet their business problems. What have you seen at Aqua over the years? I think, you know, when we started, containerization was really in its infancy. It wasn't the go-to application rollout process that it is today. We had to explain to potential customers, who maybe had one Docker enthusiast there, what this is, you know, bridge that gap between DevOps, such as it was, and security, telling people why they need to think about security specifically for these environments. And then, once the industry took off, it became really about how to bring security into that extreme series of innovations that happened, all the way from adopting containerization en masse to connecting it to cloud environments. Kubernetes became a thing, and of course OpenShift became a thing. It was really accompanying the entire growth of the industry. Luckily, I don't have to explain to people what a container is anymore.
But I think the concepts of securing this environment are still pretty much something that we need to explain every single time. Absolutely. And I think that's part of why we're here today. Now, you folks have been working with Red Hat, as I said before, for years. I know you folks have a Red Hat certified container. You have a Red Hat certified operator for OpenShift. And what that means is customers can have the confidence of deploying your products along with our products in a production environment and have that Reese's peanut butter cup moment of the peanut butter and the chocolate coming together. And they can get the support they need in the production environment. Are you folks in the Red Hat Marketplace as well, operated by IBM? Yes, yes, we are. So you can get the Aqua operator; you can just click to buy it and deploy it in your environment, and it will secure the OpenShift environment from the inside, right? All the components of Aqua run in OpenShift, and we are fully integrated with the platform. Okay, great. Well, Dave, I don't want to do all the talking here. Why don't we get you on camera? How are you today? I'm doing great. Thanks, Michael. Glad to be here. And what's your focus here at Red Hat? Yeah, as you know, we have a lot of focus on our partners. There are a lot of technical folks and a lot of sellers that deal with partners at Red Hat, which is great. I'm specifically focused on our security ISVs globally, and especially around the function of getting them to market. So I help our security partners understand how to get certified and get them in front of the right people in Red Hat to complete that certification. Once those certifications are done, then we do joint solution offerings together.
So we do things like webinars, things like this, like OpenShift TV, blogs, creating documentation, just to really educate not only externally, you know, all of our joint customers, but also internally throughout Red Hat. Okay, great. Well, don't let me stand in your way. Why don't we get started here? Sure. So what I wanted to first talk about is one of the major items we're doing this year as part of my team, part of the security team. I've been working with a lot of folks at Red Hat internally, across teams, who are focused on security, to come up with a unified DevSecOps vision across the company. One of the things we're doing around that is producing a monthly series of security topics. So what you see on the right hand side are the monthly topics. They actually map to a framework, which I'll show you here in a minute, that we've come up with to help categorize security in a DevOps environment. There are a lot of different security functions and methods and categories you can come up with; if you Google security for DevOps, you'll find 15 different answers. So we've worked with the industry and worked with our great partners like Aqua to come up with this taxonomy. It has nine different categories, and there are 34 different security methods underneath those. So what we thought we would do is break that up and, every month, talk about one of those categories. Now, vulnerability is March's topic, and vulnerability is actually not one of those categories; it's one of the subcategories underneath application analysis. But we felt it deserved its own month, because a lot of things have happened around vulnerability and vulnerability analysis this month. I'll talk about a new certification that Red Hat GA'd last month, in February. And then, you know, that kind of lends itself nicely, in the subsequent months you see there, to the rest of the categories.
But if you want any more information, you'll see some links there on the left hand side: red.ht/DevSecOps. And by the way, the D, the S, and the O do need to be capitalized. Every month, what we're doing is producing a lot of content for you to understand that security category. So every month we're going to push out three podcasts; for March, those have already dropped. And then two OpenShift TV shows; this is the second one we've done in March, and the first one was a couple of weeks ago as well. The one we did a couple of weeks ago is called DevSecOps Is the Way. So we're really excited to bring you all this content, every month this year, around security and DevSecOps. And this involves not just Red Hat: a key part of our DevSecOps vision is our partners and how they complement a full, comprehensive DevSecOps solution for our customers. So that's the plan every month. And since March is vulnerability month, and this is the last day of March, I did want to talk about the certification that we released about a month ago. It's called the Red Hat Vulnerability Scanner Certification. And the reason why we created it is probably shown in this picture in the middle here. That's my daughter, pulling her hair out. What's been happening over the last several years in the industry with scanners, specifically those that scan third party dependencies in either applications or containers, is that our customers will get a report from one of our partners, and it will not match up to what Red Hat says about our content. So if you can imagine, that's pretty frustrating; you don't know who to believe or how to triage, and there are a lot of manual steps to really find the true answer. So about six months ago, middle of last year, we formalized a pilot and invited a bunch of partners to go through it with Red Hat and create this certification.
And just like our container certification and operator certifications, we can say this partner has met the requirements of the container scanning certification, and so when you get a report, you'll see a lot fewer discrepancies between what Red Hat says and what our partner's scanner report says. It was not an easy process for both Red Hat and our partners; a lot of times our partners had to modify their code. There were a couple of key requirements. One was to make sure that Red Hat is providing the right feed to our partners and that they're consuming the right feed, which is our OVAL version 2 feed. Another requirement was to add information to our partners' scanners that shows Red Hat severity and Red Hat links. So when you get that report, you'll see, well, here's what Aqua says about this vulnerability, and maybe side by side, here's what Red Hat says about this vulnerability as well, and you can click to go see more information. So in the end, we won't be pulling our hair out; we'll look a lot prettier and happier. It's like a dream come true when you get that report and it matches up to what Red Hat says. Now, again, this is on containers right now specifically, and only for Red Hat supported packages. One of the reasons why we value our partners in the scanning industry like Aqua, and you can see they were certified and went through this process, is that Aqua can then not only tell you about the Red Hat components, but about all the other dependencies that you pull into your containers and your applications that aren't Red Hat, right? Maybe it's your custom code or files that you're bringing in, all sorts of dependencies, or maybe you're using another distribution, God forbid. Aqua can tell you about the issues and the risks around those dependencies.
And so they enhance and extend Red Hat's scanning capabilities to provide you with that complete picture. We're really excited about this vulnerability scanner certification. More partners are coming on board, and we're recruiting more partners as well. So that's the vulnerability information, and this slide here speaks to some of the categorization I was talking about earlier and how we framed this year with our monthly security topics. I mentioned the nine categories and the 34 security methods previously; this is what that looks like. What we were able to do, after determining those categories, is map the integration spots onto a DevOps pipeline. You can see here everything from the IDE all the way to building, testing, releasing, and operating in your cluster. And we've been talking with our partners in the industry, and we feel like, yes, you can have CNI plugins at the cluster, or you might want to look at a secrets vault actually all across the pipeline. So what we're able to do with this diagram is work with our partners like Aqua and create a joint solution between Red Hat and Aqua that we can use as a conversation starter, as a framework, to go to our customers, to go to market, and help folks consume security a little bit easier when they're trying to make the journey towards DevOps or a digital transformation journey. So as you're going through each piece, you can look, for example, at build automation and ask yourself: am I doing SCA, software composition analysis, to understand my dependencies in the build? Am I looking at network policies? If not, maybe I should look at Aqua to do those things. So this has really been a helpful guide for our partners as we go through the journey to help them get to DevOps. Ultimately, we like to call it DevSecOps. I know they're the same thing; there's a little bit of a debate around that.
DevOps from its beginning has always had security in mind, but calling out that security word, that Sec word, really helps to make the point that security does need to be automated, does need to be baked in from the beginning of your journey. So from a slides perspective, I think I'm going to stop sharing here. One thing I wanted to mention, and Svi, I'll bring you into this conversation as well: we've been talking this month about vulnerability analysis and scanning third party dependencies. But there are a lot of complexities around that; it's just not as easy as that diagram, right? And vulnerabilities are part of looking at risk management. But if we look at that diagram, could you kind of double click on that and help us understand more around that topic, more around the complexities of application and risk management? Yeah, definitely. So I think one thing to remember is that when we talk about vulnerability management, in a lot of people's heads this is about vulnerabilities in images. And vulnerabilities in images are a result of the fact that we're using a lot of open source components, and those open source components of course have software bugs that sometimes have security implications. And we need to manage that risk. But the risk of the image is just one of the risks that we need to look at when we look at the entire environment. There is the risk of where we're deploying the image: the risk of the infrastructure, the hosts. You know, we can take the best image, with no critical vulnerabilities or no risk in the image whatsoever, and then deploy it in an environment that is either too open or not well managed, and that's not going to be good either. And we also have to take a look at what happens after the image instantiates into a workload, to make sure that the way that the workloads are defined and the way that they're protected is all part of that.
And I think as you go through your series, as we go throughout the year, we're building on that, right? So we're going to talk about vulnerabilities today and how we can weed out the risk in images. But it's important to understand that vulnerabilities are not the end-all, be-all of a security program, and we need to pay attention to the other parts as well. Yeah, that's why I think this session today is such a good segue and a good reminder for folks: you're right, vulnerability analysis is great and you should have it, but it's not everything. To get a comprehensive security solution, you have to look at other points as well, because there are a lot of different attack vectors, right? Could you talk a little bit more about those attack vectors and how they play a role in this as well? Yeah, so even if you take the vulnerabilities in images, we've got to remember that vulnerabilities in images are potential risk, right? An image is not a live thing; it's inert. And let me, if I can, go ahead and share my screen here, and I'll show you some of the analysis that we're doing with regard to the vulnerabilities in an image. So this is how Aqua looks at the list of images. And here's an image here: it has one critical vulnerability, one high, one low. If we click on that image and get the list of the components that are vulnerable, we can see that there are a few of them. This is Alpine based, so you can see that there are some components there. We can look at a vulnerability and see how the NVD regards it, and so on. But I think in a lot of organizations, especially because there is a vendor fix for that image, right, you can update the image, a lot of organizations are going to let that image pass, at least initially, or at least give it a grace period in order to be used in the environment.
However, this image happens to be one of those images that we found quite a long time ago, actually, on Docker Hub, and when we passed it through what we call dynamic threat analysis, which looks not just at the inventory of things in the image but at what the image is actually going to do when you run it, we saw some really, really weird behaviors. We're seeing that it opens network connections, it opens SSH connections, it downloads files, and all those behaviors actually resulted in a cryptocurrency miner being instantiated, with the wallet address going to the country of Iran. So, no disrespect to anyone else, but the idea is that we need to understand the difference between the potential risk and the real risk in the image. In the same way, we can have an image that has no potential risk, or a very low vulnerability footprint, but when we deploy it on infrastructure that is not well managed and has open ports, or the OpenShift or Kubernetes API is open, we can still introduce risk into the environment. So again, vulnerability management is great, and I think we need to do a lot more, and we can go through how we deal with the way the data can be overwhelming to development organizations when they're getting those rejections. But that's, again, not the end-all, be-all, and we need organizations to move forward and make sure that they have a complete program that can deal with image vulnerabilities, but then with anything else that happens in the environment as well. Yeah, I'm wondering, because this topic about vulnerabilities, and really around dependencies, has been out in the industry now for a while: do you see, out in the field, more companies getting a better handle on scanning their dependencies earlier?
And then I guess the second part of that question is: when you go into companies, what's the biggest gap that they have in terms of detecting vulnerabilities across the lifecycle? Does that make sense? Yeah. I think the adage of shift left, basically putting our vulnerability management and risk reduction program as early as possible in the development lifecycle, is a really good thing. On the other hand, we need to make sure that the people who are absorbing that information really know what to do with it. So if we take, for instance, an image, let's say this middle image here: it has quite a few vulnerabilities, and we can take a look at the list of them, and we have the rating of each vulnerability, and you get, I don't know, 20, 30, 40, maybe even a few hundred if it's a really, really old image. And then the question is: what do you actually do about it? If I'm a developer or a software packager or a DevOps person, and I get that list of vulnerabilities, the question I have is, okay, what can I do about it? You can see Aqua kind of provides a few low hanging fruits, right? So all of these have vendor fixes. If you open the vulnerability card, this happens to be a Red Hat vulnerability, and this is a particularly old image. We can see the scoring that we get from CVSS, but we also get the Red Hat score, which in this case is actually in line with the vulnerability, but it doesn't have to be. And then, what is the recommendation? First, try to update the package to a newer version. And if you can't, can we do some mitigation using runtime policies, can we do an acknowledgement, maybe work with our security organization to see if there are any other factors that can compensate for that?
However, I'll point you to these two; it's kind of small here, but an exploit is not available. So basically this vulnerability doesn't have an exploit in the wild; nobody has published a script or executable that can take advantage of that vulnerability. And also, you don't have any workloads running, right? We're scanning the images from the registry, or we're scanning the images as they're being produced in the pipeline, but in this case we don't have any containers running in OpenShift from this image. So the question is: how can we prioritize? What are all the data points that we can have so that we're not hit with a list of 100 things, and can make sure that what we fix first is going to give us the best bang for the buck? And in that case, what we came up with is something that we call risk-based insights. If we take the entire vulnerability inventory in our environment, you can see that we have this slider on top that is really weeding things down to the medium-to-critical range, because for the low or negligible vulnerabilities, especially when the owners of the repositories, like Red Hat or other distributions, are maybe reducing the score of a vulnerability versus the NVD, we only need to show the things that are really, really important per the distribution. And then we want to understand if there are things that can be exploited from the network, because remember, the NVD is a general program; it precedes containers, it's a general purpose vulnerability repository. So not everything that you see in the NVD as high or critical is something that can be exploited in a well managed containerized environment. If you need to be on the operating system itself, with command line access or even root access, to exploit that vulnerability, then it can't just be directly exploited from the network. That doesn't mean we need to ignore it, but it's probably not the first thing that we're going to fix. And as we go along: if it's reachable from the network, if an exploit is available and can be executed remotely, and if you also have some running workloads from that repository, then we can take, even in my tiny environment here, the 1,500 vulnerabilities that we need to deal with and really reduce them to the 21 vulnerabilities that are really, really important in my environment, because these are the ones that are running. So the way that organizations, I think, are starting to deal with this is: yes, we need to shift left, we need to make sure that the owners of those images, the owners of the software, have an understanding of what their risks are, but also we can't expect them to fix 100% of the vulnerabilities. So we need to give them some idea of what they need to concentrate on first, and then get back to the less important things. Yeah, you're breaking up a little bit there at the end, Svi, but I think your point is great, because if you do the research on DevSecOps, especially around culture, you see many different guidance points, and one of them is risk focus, right? So if you're going to try to resolve every single vulnerability that is found in, say, a container image, you're going to be spending a ton of time, a ton of wasted time. And even, like you said, looking only at criticals and highs and moderates could be some wasted time as well, because if you're just looking at the NVD, as you know, they only have the base scoring; they don't apply temporal and environmental metrics. And if anybody has scanned a container with a scanner, you'll notice there could be hundreds of vulnerabilities.
So something like this, your tool here, is critical for companies to keep DevOps flowing, right, and to not slow it down, so that you're not gated by unnecessary analysis and unnecessary triage on vulnerabilities that would never be exploited in your environment. Yeah, that's true, and the process is what's critical here. There needs to be some good intention on both sides, right? We can't expect the development side to fix 100% of the vulnerabilities; on the other hand, having security put in a rule that says you will not have any critical vulnerability in any of your containers whatsoever is also probably not feasible, because a lot of them do not have a fix, and then you're usually stuck; you really can't use that component. But also, we can do a lot of mitigation of vulnerabilities just by managing our environment well, and you can see a lot of those vulnerabilities do not have remote exploits. For a lot of those vulnerabilities, you really need to have some kind of foothold in the environment in order to take advantage of them. And that's why, when we talk about vulnerability management and how we deal with vulnerabilities across our entire development pipeline, we need to understand not just the vulnerabilities in my images, but also those in my environment. So for instance, if we take a look at the infrastructure, the hosts, and this is my OpenShift environment, and we take a look at the ones that I'm managing: you can see that these two hosts, which run the optimized operating system for OpenShift, really don't have a lot of vulnerabilities in them; actually, there are no detected vulnerabilities in them. We've taken our OpenShift environment, our demo environment, and deployed it on a hardened operating system that strips out the components a containerized environment does not need.
And that allows us to maybe not mind as much if we have some of those vulnerabilities that cannot be exploited remotely, because you really can't access the host. So, you know, as a security professional I'm being very, very careful, and I really don't like to tell people to reduce a control or not implement a control. On the other hand, I think we as security professionals need to be reasonable and make sure that the way we ask our development colleagues to fix their images is, first of all, something that they can do, and something that makes sense for them to do, because they still have an application to develop and an application to roll out. And that's kind of where organizations need to have good communication and good understanding: understand what the risk posture is overall, and not just hang their hat on the vulnerability management of images and try to pin all their security problems on that, because this goes much wider than that, all the way into the runtime environment where those containers will eventually run. Yeah, that makes sense. So let's talk about the points in the lifecycle where you would optimally scan. Obviously CI/CD, but what other points of integration are really important to have a scanner? So if you think about the lifecycle of the image, you want to first of all test your base images. If you are building on base images, you want to make sure that you're not introducing risk just by having that base image, and actually resolve that, and we can show where vulnerabilities are coming from, whether from the base layers or the upstream layers. The other thing is that we need to test in various places.
So, for instance, an organization might have a curated registry that people can build on. Let's say that I'm developing on Red Hat 7: there's always the latest version of Red Hat 7 in my registry, so I can build my images on top of that. And if we then decide that we need to move to Red Hat 8, I'm not taking it directly from the upstream repository; I'm taking it from that curated registry, where the organization has already tested it, has blessed that image, and makes it available. So first of all, getting a handle on the base images is something that we absolutely recommend and have been recommending for the past five years. The other thing is, once an image is produced, we want to let people understand as soon as possible what the potential issues in that image are. So, integrating with the development environments, with the build environment, integrating with the CI tool: Aqua has a command line scanner that can be employed anywhere inside the CI, integrated with the Aqua UI, and that's the version that's been certified with Red Hat. And then, once an image gets to the registry, it's also important to understand that the vulnerability posture might change over time, because vulnerabilities are discovered on existing components. Just because we scanned the base image and got a snapshot in time, and then scanned our artifact and got a snapshot in time, doesn't mean that our job is done, because we also need to understand, for the lifecycle of that image, whether or not new vulnerabilities have been discovered on the components that are already in the image.
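The CI integration described here usually boils down to a gate: the scanner emits findings, and the pipeline stage fails when anything at or above a chosen severity shows up. A minimal sketch, assuming a scanner that outputs JSON findings (the report format here is invented for illustration, not Aqua's actual output):

```python
# Hypothetical CI gate: parse a JSON findings report and return a non-zero
# exit code when blocking-severity vulnerabilities are present. CI systems
# treat a non-zero exit status as a failed stage, which stops the pipeline.

import json

BLOCKING = {"high", "critical"}  # policy choice: what severity blocks a build

def gate(report_json):
    findings = json.loads(report_json)
    blockers = [f for f in findings if f["severity"] in BLOCKING]
    for f in blockers:
        print(f"blocking: {f['cve']} ({f['severity']})")
    return 1 if blockers else 0

report = '[{"cve": "CVE-X", "severity": "high"},' \
         ' {"cve": "CVE-Y", "severity": "low"}]'
exit_code = gate(report)  # 1: CVE-X is high, so this build would be stopped
```

In practice, as the conversation notes, the threshold should reflect real risk tolerance rather than a blanket "no criticals ever" rule, since many findings have no fix or no remote exploit.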
So one of the things that Aqua can do is connect to the registry, or the image stream in OpenShift, and then continuously scan those to make sure that if new vulnerabilities are found, we will be able to understand what those are, and that may change the decision on whether or not we want to employ that image. And then, as containers are running, we also need to continuously scan those as well, to make sure that, again, if new vulnerabilities are discovered or new files are added, we have the ability to see what the real risk posture of that artifact is. So it kind of goes up and down the stream; there are many points where we want to scan, and we want to do it over and over again, because just a point in time is usually not enough to give us the current risk posture of a particular artifact. Well, actually, we have a question that came in, and I think it's relevant to what we just talked about. Walid is asking: can I verify that the image on a host was the canonical image, pulled from a trusted registry and not tampered with? This did not happen in a cluster scenario in the last couple of weeks. Yeah, so one of the things that we do with Aqua is that, as we scan an image, we also take the metadata of the image, including calculating a hash of that image, so you can verify its authenticity as you roll it out. And one of the things that organizations, I think, are just starting to figure out is that just because somebody pushed an image to the registry doesn't make it trustworthy. The implementation of registries goes back to the Docker days, and there are some vestiges of practice that we probably want to get rid of: we really don't want to reuse a common tag for different images; we want to use unique tags for images in the same repository.
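The hash-based verification described here can be sketched with standard hashing. This is a simplified illustration of the idea, not Aqua's implementation or a real registry client: content-address the image manifest at scan time, then admit a pulled image only if its digest matches, which is exactly why a mutable tag is not enough.

```python
# Sketch of digest pinning: the same bytes always produce the same sha256
# digest, so a tag that was silently overwritten is detected immediately.

import hashlib

def manifest_digest(manifest_bytes):
    """Content-address an image manifest the way OCI registries do (sha256)."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def verify_pull(expected_digest, pulled_manifest_bytes):
    """Admit the image only if what we pulled matches what we scanned."""
    return manifest_digest(pulled_manifest_bytes) == expected_digest

scanned = b'{"layers": ["abc"]}'       # manifest bytes at scan time (made up)
pinned = manifest_digest(scanned)      # digest recorded by the scanner

ok_untouched = verify_pull(pinned, scanned)                   # True
ok_tampered = verify_pull(pinned, b'{"layers": ["evil"]}')    # False
```

Pulling by digest (`image@sha256:...`) instead of by tag gives the same guarantee natively in Kubernetes and OpenShift.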
And we want to make sure there is a binary, hash-verifiable match between what we developed, what we put in the registry, and eventually what we deployed. Because yes, a registry — if you can authenticate to it — will happily accept an override of an image, and a Kubernetes-based environment will also happily accept almost any image from a registry it can access and authenticate to. Hopefully that helps answer Wallet's question — great question, thanks for asking. It brings up another one about gates: how have you seen admission controllers or gates being used within a running cluster, and in what types of scenarios do you see them being used? Yeah, I think image acceptance is a really, really important pillar in our risk management program. As I said, a Kubernetes cluster will happily accept any image from a registry it can connect to, and there are a few problems with that. First of all, nobody deletes old images from registries and image streams, so there's always the potential that by mistake you'll use an older image with a vulnerability posture you don't want, or software that's out of date. So the question is: how do we make sure that everything we roll out is the most current, most secure, most appropriate thing to deploy? And admission controllers are exactly the tool for that — that's what they are: they control which images are admitted to the environment. So in the Aqua environment, if you deploy Aqua on your cluster, there's a component called the Kubernetes enforcer.
You get an admission controller that can verify the security status of images, and each image that has been scanned by Aqua — based on policies that define the risk tolerance for that particular repository — has a status of either compliant or non-compliant. One of the things you can do in Aqua is decide not to accept non-compliant images. On the flip side, you may also have a big chunk of images that are not known to your security environment, and another thing you can do with Aqua is disallow unregistered — basically unfamiliar — images. Talking about the acceptance gate, we ask ourselves three questions: Is it something that we know about? Is it something that we expect — is this the right time to deploy this image? And what is the risk tolerance we have for that particular deployment? We need that flexibility, because an NGINX deployed as an internal component of a very internal application might have a different risk tolerance than an NGINX deployed as a front end. So all of these decisions culminate in the admission controller, which needs a decision tree: Is that image expected — do I know it, have I seen it before, is it coming from my organization or from somebody else? Is it something we expect at the appropriate time, deployed in the right way — is the YAML file for that deployment one that we accept? And then of course, what is the risk tolerance — is it compliant or not compliant for the environment you're trying to roll it out into? Great. Looks like Wallet has a follow-up question: if I want to pull from specific repos, like from quay.io, do I need a custom validating admission controller? Can Aqua validate specific repos — like, pull from quay.io/openshift but reject quay.io/bad-actor?
So, part of the controls — part of the decision tree of the admission controller — is which logical registries we want that image to come from. The way you do it in Aqua is you define your registry as a logical registry and give it a name. It can have various IP addresses, it can even have a load balancer that points to different physical registries, but as long as it's that logical registry you want to pull from, you can put a rule in Aqua that says: for this environment — even at a namespace level, for this namespace — I only want to accept images that come from this logical registry that I predefined. And you can add the conditions of "I've seen it before" and "it is compliant with the security posture I want to have for that namespace." Cool. And I might put you on the spot here: not just a registry — a repo that's part of a public registry. That was Wallet's question. Yeah, so that's not a problem. The scoping rules basically let you take the entire image path — which is registry:port/repository-path:image-tag — treat that whole string, and apply pattern matching on it (not necessarily regex). That makes sure that only images matching that specific pattern are accepted into the namespace. So the image name and image path are also things we look at in the acceptance policies. Cool — Wallet says thank you. I wanted to ask: in the field, because we hear about vulnerable third-party dependencies all the time, do you see a lot of this scenario we've been talking about, where a bad actor manipulates an image or adds malware to images? How often do you see that out in the field? Yeah, so it's not super common. But as you saw earlier with that image that had the Bitcoin miner — that was an image that was actually on Docker Hub, so somebody had put it out there.
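Stepping back, the acceptance gate described above — is the image known, does it come from the expected registry path, is it within this namespace's risk tolerance — can be rolled into one admission decision. A hypothetical sketch, not Aqua's actual policy engine (glob-style matching stands in for whatever pattern syntax the product uses; the image refs are examples):

```python
import fnmatch
from dataclasses import dataclass

@dataclass
class Image:
    ref: str              # full path: registry[:port]/repository/path:tag
    registered: bool      # has our scanner seen this image before?
    critical_vulns: int   # critical findings from the latest scan

def admit(image: Image, allowed_pattern: str, max_critical: int) -> bool:
    if not image.registered:
        return False      # unknown / unregistered images: deny outright
    if not fnmatch.fnmatchcase(image.ref, allowed_pattern):
        return False      # wrong registry or repo path for this namespace
    return image.critical_vulns <= max_critical  # per-namespace risk tolerance

pattern = "quay.io/openshift/*"   # scoping rule for this namespace
good = Image("quay.io/openshift/origin-cli:4.7", registered=True, critical_vulns=0)
bad = Image("quay.io/bad-actor/miner:latest", registered=True, critical_vulns=0)

print(admit(good, pattern, max_critical=0))  # True
print(admit(bad, pattern, max_critical=0))   # False
```

Note that this also answers the quay.io question: the same pattern match that admits `quay.io/openshift/*` rejects any other repo on the same public registry.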
It was part of a repository that did useful things. We're seeing those from time to time, and if you look at the press, different research teams at different companies are following up on this as well and publicizing those instances. And the supply chain problems are not limited to whole images — we had the npm event-stream incident a couple of years ago. There was a survey that found a significant number of, let's call them, not-really-well-governed repositories on GitHub that might allow somebody to insert malicious code, and there was an incident that just came through my feed yesterday about something on GitHub. And there's no NVD for that, right? There's no global registry of components and versions known to be dangerous because they were compromised. That's why dynamic analysis is so important. If you're accepting even a base image from an external source, put it through dynamic analysis: run it for a little while, see what it does, make sure it's not going to do anything bad. If you then put components on it — especially a new stream of components — build your image and again do a dynamic analysis on it; use it for a while in a sandbox, trace what it's doing, make sure it's not doing anything bad. I think we're at the point right now where organizations need to take responsibility for themselves for the images they produce and the components they use in those images. I would anticipate — and I don't know this for a fact — that at some point there will be some central registry for those compromised components. And then there's just common sense, right?
Don't use a component out of a GitHub project that doesn't have recent commits, that doesn't have a lot of stars, where you don't see active participation or much governance around it — try to understand where you're sourcing your components. I think that will go a long way towards ensuring you're not introducing anything dangerous. And, you know, you can have all the vulnerability scanners in the world looking at everything, but there's always the chance that a vulnerability isn't publicly disclosed. I think you might have a demo to show us? Yeah, that can happen — so let me go into that, and I hope my network glitch hasn't destroyed my environment. The idea is that a lot of times you see it in subcomponents. So this is my WordPress — I'm port-forwarding into my OpenShift environment. This little plugin here is a component that has an exploit that can cause remote code execution. That means that if you deploy it inside your WordPress environment — and this is a component that can be added later to the image, it's something that is dynamic in this environment — it's not disclosed, you're not going to see it on the NVD, because there's no vulnerability feed for that. So what I'm going to do is exec into this container and show the process list here. I can actually exploit it by running a little Python program that starts a cryptocurrency miner in that environment. And if I do that, you can see — just by going into that environment — the process sitting right here, and you can see it starting to take a lot of the CPU of that pod.
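One common mitigation for exactly what the demo just showed — a binary dropped into a running container — is to allowlist the executables that shipped in the scanned image and deny everything else. A minimal sketch of that idea (the file paths and the simple set lookup are illustrative, not Aqua's mechanism):

```python
# Executables known from scanning the original image (illustrative paths).
baseline = {"/usr/sbin/nginx", "/bin/sh", "/usr/local/bin/php"}

def allow_exec(path: str, image_baseline: set[str]) -> bool:
    """Permit execution only of binaries that shipped in the original image."""
    return path in image_baseline

print(allow_exec("/usr/sbin/nginx", baseline))  # True: part of the image
print(allow_exec("/tmp/xmrig", baseline))       # False: dropped at runtime
```

The cryptominer in the demo has no CVE and appears in no vulnerability feed, but it also wasn't in the image — so an image-baseline check catches it where scanning cannot.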
So that's part of the problem: even if we do a lot of vulnerability analysis, and we think we don't have anything bad coming in and all our vulnerabilities are within acceptable tolerance, we can still be susceptible to those zero-days, those unknown stealth attacks that are still out there. And what you can do — we'll probably talk about this later in the year, in September, around runtime — is that a lot of the vulnerabilities we know about, but also the ones we don't know about, can be mitigated with runtime controls. So if I go into my OpenShift environment and try to do the same thing — let me go into my pod, which is hopefully up and running, and make sure my connection is good — if I go into my pod, pull up the processes, and try to do this in this environment, there is an Aqua component that runs inside OpenShift, again as part of that Operator, that senses that this is a file somebody tried to add into the container that was not part of the original image, and therefore we deny its execution. So our fallback, even with good vulnerability management, is to deal with how we want our containers running: monitor them, and deny execution of anything that is not part of that container, part of that application. At the end of the day, it's really all about what is appropriate to run for the workload, for the business purpose. Security professionals, and sometimes even developers, forget that we're developing software for a reason — a piece of software needs to do something. And if we can discover that it's doing something it's not supposed to do...
...that is going to be our fallback. That doesn't mean we can be completely blasé about introducing vulnerabilities into the environment, but it goes to show you that vulnerability management alone will not protect you against this particular attack. On the other hand, there are mitigating controls in runtime that can really do a lot of good in these kinds of environments. Yes — excellent demo. I know we have about eight minutes left, so I'll close here in a little bit. Before I do, I just wanted to ask: do you have any interesting customer war stories? Obviously you don't have to name names, but for example: you've walked into a customer, they had no security controls in place, you ran Aqua and found all this interesting stuff — anything like that you could share? It happens almost every single time. We go into an environment and — either on the images themselves, or on the hosts, or in the Kubernetes-based environment, or in the cloud — you find those openings for attackers to gain a foothold. And coming from a security background, it's an old adage that security needs to be perfect 100% of the time, while an attacker only needs to be successful once. If we sum up the entire cybersecurity methodology, that's it: we need to be perfect 100% of the time. So I think my interesting war stories are really not about security findings but about the process. Early on — and to this day, from time to time — when we go into a customer, back when we used to physically go into a customer and sit in a conference room, that would be the first time that security and DevOps actually met. People would be working in different parts of the building, or different parts of the company.
And it would be the first time they were sitting face to face and really hashing things out, because Aqua deals with both sides: we deal with the development side, and we also deal with the cybersecurity side. And sometimes they don't know each other. We went to an energy producer, and we were talking to the people who built the applications, and they said, well, why don't you go to the SOC, the security operations center, because we're going to get events out of Aqua and correlate those with the rest of what's going on in the environment. So we were led to the SOC — went to another building, through some doors — and were introduced to the people of the SOC. We explained what we're doing, and their initial reaction was: well, we don't want anything to do with it — we're too busy with other things to care about this little corner, which is the cloud native environment. Now, granted, that was a couple of years ago. And I think as more and more meaningful, business-critical workloads go into OpenShift — I think IBM just said they're not going to develop on anything that isn't OpenShift, and if that's not an endorsement, I don't know what is — as those workloads become more and more important, I think you're going to see a lot of participation by different functions of the organization. But as of now, we still have those instances where security and development are meeting for the first time, trying to feel one another out, trying to understand how to work together. And, you know, it's scary, but it's also very hopeful, because what we're also seeing is a lot of willingness to do that. There's a lot of willingness on the development side to really take the vulnerability posture seriously, to do what they need to do to fix it, to put in the time and effort to fix it.
And we're also seeing that security organizations are becoming more reasonable — understanding what the real risk is, understanding how to deal with these environments. It's still a learning process, but I think we're making good headway. I've got a question for you: what's next? Meaning, what do you guys have going on this year? We just started our Q2 — actually, our Q2 starts tomorrow. Do you have any events going on? Is everything still going to be virtual for the rest of this year? What about KubeCon in LA, coming up in October — do you think we're going to be there in person once again? I honestly don't know. We just had a prep call for KubeCon Europe in May, which is going to be completely virtual, and we'll have some presence there — our virtual booth — and I think people are starting to get used to the idea of going virtual. I, for one, miss those conferences, because when you talk to people, when you get those offhand conversations in different places, waiting in the endless lines for food — those really are very valuable. As far as what's next: I think the adoption of cloud native technologies is going to continue; there's no reason for it to stop. I think workloads are going to become smaller and smaller — minimal operating systems are going to be the thing people go to, which is great, because the vulnerability posture is going to be cut by a lot. And I think you're going to run on hybrid environments, where workloads can instantiate on your OpenShift on premises one day and on your OpenShift in Azure the next.
Those things are going to happen, and we're going to need to manage how workloads communicate and how we reconcile the instantiation of those workloads in different places. So I think the industry still has a lot to do; we're not resting, and we're definitely going to accompany it the whole way. There's a question I wanted to sneak in here about half an hour ago, but you guys were clearly not letting me get a word in edgewise. As containers get smaller and smaller, and we start to really see true microservices pop up everywhere, and service mesh becomes more and more important for managing them — how does that affect the Aqua security story, if at all? Yeah, I think service mesh has kind of a dual role. It is primarily a layer for service routing and discovery, making sure that workloads can find one another. On the other hand, as part of the routing, you can also enforce mutual TLS, you can enforce authentication, you can even decide that some workloads should not talk to other workloads. So we see service mesh as both an entity that needs to be managed — because it is running as open source software, the Istios of the world and so on — and something that can be an enforcement point. And at Aqua, we really don't have a problem outsourcing the enforcement point to a mechanism that is already in the path and can execute the required controls. So I think the fact that service mesh is becoming more and more viable as a scalable solution is going to introduce a lot of capabilities for us. And actually, Wallet is contradicting me — he says in data science, ML/DL containers are getting bigger. Sure — not going to disagree with you. Anyway, we have 30 seconds left. You're live on YouTube, you're live on Facebook, you're live on Twitch.
Is there anything you didn't cover that's going to prevent that phone call from your manager, your boss, saying, "I can't believe you were on the internet for an hour — why didn't you say that?" What would that be? I think I covered everything. I would say: think broadly. Don't concentrate just on vulnerabilities — think cloud, think environment, think the workloads. Think holistically and end to end. No single control is going to save you from a sophisticated attacker, but a series of controls might frustrate them enough that they go elsewhere. Okay. Svi, it's been excellent having you on here. This has been terrific, and I'm hoping you can come back and join us again. We also have a podcast series — it's called the Red Hat X podcast series. It's on iTunes, Google Play, and like 41 other sites, like Spotify — it's all over the place. Maybe you want to do a podcast with us in the next month or so? I think this is a great topic; we could have some fun with it, too. Yeah, I'll be happy to participate. It's been fun for me to be here today, and I really thank you. Okay, great. Well, thanks for making such an interesting hour here for us on the OpenShift Commons Briefings, the Operator Hours. We are signing off here, because my producer is chatting me up saying, "You're running over again, Mike — you gotta go." So, Dave Muir and Svi, thanks again for being here today. Godspeed, and stay safe out there. Thank you.