Hello everybody and welcome again to yet another OpenShift Commons briefing, and this time I'm really pleased to have Aqua with us. They are brand new members of the OpenShift Commons, and as I always like to do, I make new members introduce themselves and explain what they do in the ecosystem of OpenShift and Kubernetes. These folks were at the KubeCon event in Berlin, and I was pretty impressed with their offering, and then they joined up to the Commons, so I was even more happy to have them as part of our community. So today Tsvi Korren is going to do the presentation. We also have one other person, Upesh Patel, who's online with us as well. We're going to let them introduce themselves and give a presentation for about 30 minutes or so, and we'll have some Q&A afterwards. So Tsvi, take it away.

All right. Hi everybody. Really excited to be presenting here to the community. We've been in the Docker community for a while now as a company, and I can jump right in with a little bit about Aqua. Aqua was founded about two years ago. Our mission statement is to make the use of containers as secure as possible, and to provide organizations with the expertise and the tools they need so that security is not something that prevents containers from rolling out; in other words, not having security as a barrier for containers. Our product, what we sell and what we support, has been around for a year and has been used in multiple environments, both big and small, and I'm happy to share some of the product details with you today. We're also very much engaged in the community. We are an OpenShift Commons member and an OpenShift Primed partner; you can find that information in the partner section of the OpenShift site. We do a lot of meetups.
We do a lot of conferences, and we have a lot of engagement with companies and also with individuals who want to advance the cause of containers and make sure that they are adopted securely. A little bit about myself: I actually come from the security side. I've been in security for over 20 years now, and my focus is compliance, security processes, and procedures. In the last two and a half years or so I've been concentrating on securing DevOps, specifically containers, and doing a lot of advocacy for container security. With me is Upesh Patel, who does our community outreach and business development for Aqua. He was the one who got us certified and on board in the community and made sure that we have the right relationships with the entire ecosystem. What I'd like to do is jump into what we're doing today, and that is talk about security for containers. It's a pretty big subject. Security for containers touches a lot of things, the same way that containers touch a lot of things. It's a new paradigm in IT, and it turns things a little bit on their head. The security that we've had so far around application development, the operations of application deployment, and the networks really didn't have a lot of interface points. I'll show a few of those interface points: firewalls, and web application firewalls specifically, interface between the needs of the application and the needs of the network. The way that code is assessed and analyzed throughout the environment is an interface between development and operations. And of course there's networking and operations for server configuration and patches, and containers sit right in the middle of it. There are a lot of capabilities in containers: they shorten development life cycles, they provide an opportunity to roll the same software out in multiple places, and they give you the flexibility to move to the cloud in a very robust way.
But for security, the fact that containers sit at the heart of very different, sometimes very disparate security practices requires attention, and it has been a barrier to adoption, because it turns security from something that happens very late in the development process into something that needs to be thought about in advance, and there needs to be a lot of collaboration between development, ops, and security. So this is where we sit today: containers are changing a lot of the security process and the IT process in general, and we need to make sure that we keep up with them. Platform security, which is where OpenShift comes in, is about providing the most secure platform for running containers from an operational point of view: controlling who can access the nodes, controlling authorization around which application spaces can be started and what the different users can do, handling administrative rights, making sure that container resources are set properly, that the server is hardened up front, and that there are elements of intrusion detection and process separation. That is actually just a partial list of all the security benefits that OpenShift provides, and it really cuts across both the operational and the network side. What that allows those of us who want to run applications in containers to do is to stop thinking about the things we used to think about: how do we harden an operating system, how do we make sure that we have consistency in administrative rights, how do we make sure that once we put a server out there it doesn't change over time and introduce security risks that were not initially there. So OpenShift, and admittedly other container platforms as well, but OpenShift in particular, is very much security focused, doing a lot of good work to make sure that the defaults of Docker that may not be as secure are actually addressed, and it really gives you confidence to run the platform.
But that's just one side; that's just the platform. The applications themselves are still the applications. Organizations still develop applications in containers, and that code has nothing to do with Docker or Red Hat or Kubernetes or anything that the configuration of the platform can address. The applications are still applications. And 80 to 90% of the code that is going to run in a containerized environment is not going to be the platform; it's actually going to be the payload that serves the business purpose of the application. In that respect, security still has some concerns. The prepackaged images, the things that come out of the development world and then get put on a particular server or a particular node through a deployment, are not well understood. There's not a lot of transparency in the way that applications are written in containers, especially in the way that they are put together with the operating system. And that creates a lot of uneasiness for security operations, because they just don't feel that they have control. That feeling exists because they really don't know where security fits in the process. Security has a very good history of being at the junction where applications get put on servers, where everything can be assessed and the risk can be understood. But now that we're not building servers anymore, there's really nothing for security to hang on to. So that's a problem. Container platforms, even Kubernetes itself, can provide a lot of operational views into containers, but not a lot of security views. So the traditional security tools that are used to view the environment and assess its risk are not in use. And then, because there's a lot of open source usage, some of it assembled as a platform like OpenShift, some of it instantiated just as an open source component, that also creates an uneasiness for security.
I'd like to point out a little bit about the process, because in the pre-container world, you used to basically build your code, and that's the top line there. You could do some static analysis, evaluate the code, make sure that coding practices were in place. Then you would compile a package or a component that could be run on a server that was provisioned beforehand. So somebody would provision a server and configure all the necessary operating system pieces. They would put Java on it if that was the case, they would put the application server on it if that was the case, any type of middleware that was required, assess the risk, correct it, configure it properly, and then these two things would get married and deployed. And after that deployment was done, there was still an opportunity to take a look at the security configuration and other security parameters of the servers and fix them while the server was in production. Containers work very differently. In the container world, the coding and static analysis should still happen. We are talking about applications that are still written in programming languages that can benefit from static analysis. Code still needs to be compiled. But then, when building an image, when doing a docker build, that is where the risk assessment comes into play. It can't be done later in the process, because once the container is instantiated, once the deployment is started, once pods are instantiated in the Kubernetes environment, it becomes almost impossible to patch them. And we really shouldn't, because containers should not be touched after they're rolled out. So what we're doing is moving security from the very end of the process into the beginning and middle of the process. And that represents a change. It represents a change that needs procedural, process, and communication practices in an organization to support it.
But it also requires a software solution that can provide the right visibility into where the process is. So what are we actually doing about it? What is Aqua doing about it, and why are we educating the market in order to embrace containers? Well, we want to answer the needs of security people and make them feel comfortable with deployments of containers. So what do security people want? They want safe images from trusted sources: images that can't be tampered with, images whose components we know. They want common security practices, like all the things that OpenShift can do, making sure that no out-of-band change can happen to the environment and that there is proper authorization. They want network segmentation to make sure that whatever sensitive data you have is still segmented, segregated, and safeguarded. And anything that we do in an environment, container or not, especially if it's a regulated, in-scope environment, needs to be audited, not just so we can do root cause analysis, but to provide the data for demonstrating compliance. A lot of the rules for security, and I know if you're coming from the dev or ops side they may seem arbitrary, but that's the universe we live in. A lot of times you just need to provide data for compliance because you need to provide that data. All of these practices have been developed over the years to support the server environment and the VM environment, and the same process actually happened when we moved from physical servers to virtual machines. All these questions needed to be answered, and it took a little time for security people to come on board. That's where we are with containers, but we want to accelerate that process, and we want to give security people, and DevOps people as well, the tools to adopt containers in a secure way. So there is an opportunity to actually change the way that security is done. Security could benefit from the shift left, which I hope I don't need to explain.
This is where a lot of the processes are shifting toward the development side. We want to make sure that there is automation in place, so that security can also benefit from whatever automation is done by the orchestration or anything around it. And we also want to be more preventative. Because containers are microservices, there's a chance to be a lot more proactive, a lot more defined, in the way that we do our security. The way that we actually insert that into the process is to look at the classic phases of the container lifecycle and figure out what security processes and what security controls need to be applied in each and every step. At a very high level, and this is a very incomplete list: when you build a container, when you do a docker build, make sure that the code is done right and that we have a good understanding of what goes into the image. It starts with the FROM line, really understanding what the base image is, and then understanding what other components will be built on it, all the way to the payload of the application. As containers are promoted from environment to environment, or along the pipeline, we need to make sure that our assessment is still valid. So preventing any change to or tampering with an image is something that needs to be done all the way up to the rollout to the nodes. When we deploy to the nodes, the nodes themselves are operating systems. They don't have as many components as operating systems that carry payloads of complete applications, but there is still, a lot of times, an SSH service out there. There is still the ability to run operating system commands. The docker group is still valid, even in a Kubernetes environment. So the access controls, both to the physical node and to the Docker engine itself, are important. And then, when containers run, we need to make sure that they are running in line with their business purpose.
And this is where we can insert a lot more of the security controls. What I'd like to do is dive into each of those areas from an application security point of view and mention how we can execute them in an OpenShift Kubernetes environment; in a plain Docker environment the same things really need to happen. What OpenShift helps us with is providing a lot of the controls that Docker doesn't provide by default and making them available. So the first thing, actually before an image is built, is to figure out where the image is coming from and how we want to build it. By deliberately designing the build process of a container image, we need to make sure that the images are secure, that the base images are secure, and that whatever we're building on is not going to bring with it a set of vulnerabilities or a set of configurations that is not in line with the best practices of the organization. We want to register those images so that only approved base images can be used, scan for vulnerabilities across the entire environment, share information between development, security, and ops, and basically get good visibility into the entire system. If we get good visibility, if we're transparent, then we have a much better chance of that environment being seen as secure, being seen as something that security feels comfortable with. On that note, it really all starts with development. So let's go into an environment, and you've probably seen this before: this is Jenkins. Jenkins is a CI tool that basically drives the steps in which an application is built. And with Aqua there is a way to provide a service that those CI processes, or actually anybody else, can ping to get an instant assessment of an image. So this is a build that was done in Jenkins, and this build basically starts with the pull of the base image. So this is the base image that we're pulling.
We are then running the Aqua client container, called Scanner CLI, to do an assessment of the base image, make sure that it complies with policy, and make sure that it's registered as something that is approved for use. We are now collecting all the packages that were in the base image, and we can report whether the base image is okay to use. In this case, it is. We are then doing the build. So that's pretty normal: we're doing the build, copying some files (this is not a very robust build), installing NGINX, putting in the NGINX command, and so on, building the server. We're pushing it into the registry, and then we're going to do an assessment on it. You can do the reverse: you can also do the assessment first and then decide if you want to push it to the registry. This order is just for the purposes of my demo, because I need to use that image later. So that image assessment is now taking place on both the base image and the entire stack, and we're getting a response. And the response, unfortunately, is that this image is disallowed. We're also saying that we're going to prevent this image from running, which is something to remember, because we're going to show how that's done. So right now we have a build, and that build failed for a security reason. Okay, why? Whether you're a developer or an ops person, you'll ask yourself that, and we are providing the answer. We provide an output that tells you exactly why this failed. This is a blocked CVE whose score is too high for the purposes of the policy. So if we can avoid using this component, or maybe change it or update it to a new version, this vulnerability will not be there, and then the image will be allowed to continue. And that's exactly what I did.
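A CI integration along these lines might look like the following Jenkins pipeline fragment. This is a sketch only, not Aqua's documented syntax: the scanner image name, registry names, and command-line flags are placeholders and will vary by product version.

```groovy
pipeline {
    agent any
    stages {
        stage('Build and push') {
            steps {
                // Build the application image and push it, as in the demo.
                sh 'docker build -t myregistry/demo:1.0 .'
                sh 'docker push myregistry/demo:1.0'
            }
        }
        stage('Image assessment') {
            steps {
                // Hypothetical scanner invocation: the scanner client runs as a
                // container, assesses the image against policy, and exits
                // non-zero when the image is disallowed, failing the build.
                sh 'docker run --rm aqua/scanner-cli scan myregistry/demo:1.0'
            }
        }
    }
}
```

The assessment could equally be placed before the push, as mentioned in the demo; the ordering here just mirrors the walkthrough.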
So my version two of that image, and now I'm going to go to the build, has more or less the same thing: getting the base image, scanning the base image, and so on, and then building the image. And now we're getting success. This is not the end of the build usually, because there are other things that need to be done, but for our purposes this is where the build finishes successfully. We're still providing that vulnerability report, but now we see that the scores are pretty low, there's nothing there that violates the policy, and therefore the image is allowed to continue. Now, maybe somebody decided at one point to do a new build, to experiment with building the same environment or the same application on a different base operating system. In this case, I'm building it on Debian and not Alpine. Debian of course packages applications differently, comes with its own vulnerability sets, and so on. But for my organization in this demo, Debian is really not assessed, right? I don't know what that is. Security hasn't evaluated it. It's not authorized to be used as a base image. And even though the build carries on, we're still disallowing the image. We're disallowing it not because of a vulnerability, but because it wasn't built on a trusted base. So even though it has about the same number of vulnerabilities as the image that was allowed to proceed in version two, version three may not have a much bigger vulnerability posture, but it is something that security hasn't assessed yet. So we're not only dealing with vulnerabilities here; we're also dealing with the configuration and the base image, and assessing whether or not it is acceptable for the organization. What I'd like to do now is take a look at the Aqua side of things. So that was the CI side; now the Aqua side. We have our demo application here. Let me just sort it alphabetically.
And we have the same information in the Aqua UI. So demo number one is disallowed because of a CVE, two is okay, and three is disallowed because it's not using an approved base image. All of these correspond to an image assurance policy that can have many parameters, but the ones I chose to use in this demo are the vulnerabilities that should be blocked and the approved base images that can be used in my environment. By the way, if I had built it on Debian Jessie, that build would have been successful, because that is an approved base image, right? So sometimes it's not really the type of operating system; security may just be a little bit behind in assessing the compatibility of newer versions of the OS as a good base, and that takes time, because security needs time to do their work. That's something that organizations will need to negotiate, making sure that the right versions are in place. So this is the image policy, and that's why some of the tags are available to use and some are not. Another thing that we're providing with Aqua is information that ops and security might want to know about the image as a whole. So it's not just the vulnerability posture, which is here, and we'll take a look in a little bit at the vulnerabilities and what information we get on them; it's also the packages that are deployed in that image and some of the metadata about the image. If you remember, we were using NGINX. Just knowing what the command is will probably put a lot of people at ease, because once you instantiate an image, you really don't know what it's going to run. So that's one thing that we can provide security, and make sure that they have the right authorization for it.
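The base-image decision the policy enforces comes down to the first line of the Dockerfile. A minimal illustration, echoing the NGINX demo build (paths and contents here are hypothetical):

```dockerfile
# Alpine is on the approved base image list in this demo,
# so the scan only has to judge the layers added below.
FROM alpine:3.5

# Install NGINX and copy in the application payload.
RUN apk add --no-cache nginx
COPY site/ /usr/share/nginx/html/

# The command recorded in the image metadata, visible to security.
CMD ["nginx", "-g", "daemon off;"]
```

Changing only the FROM line to an unassessed Debian tag would be enough for the image assurance policy to disallow the result, independent of any CVEs in the image.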
As far as the vulnerability posture goes, a lot of information needs to be passed back to development in order to know what to do. In Aqua, we are doing a lot of work to provide a good understanding of what each vulnerability is and what its impact might be. You can see that we have several types of vulnerabilities; CVEs are actually just one of them. We are using WhiteSource as one of our sources for code that is not operating-system based but package based, and we can give you fix suggestions and advice on how to address them. We are correlating the Red Hat vulnerabilities with the NVD vulnerabilities. So if you know how to read the NVD, those are the common vulnerabilities that have been pushed up to the national database, even though Red Hat may have a different view of them. Again, we're providing a description, the fixed version, and solution advice, and we link back to the NVD and also to Red Hat, so that every time you develop on a Red Hat or CentOS based image, you can get a good understanding not only of the NVD view of the vulnerability, but of whether or not Red Hat agrees. And Red Hat is actually doing a lot of good work in assessing the security of the tools here. So here we have other vulnerabilities where there's a correlation between Red Hat and the CVEs. Something to make a note of is that sometimes there are vulnerabilities on which there's no agreement; there's not a lot of agreement in the vulnerability space sometimes. What we've done at Aqua is the work to identify the ones that are negligible and may not be something you need to worry about. In this case, if you scanned it with basically any vulnerability scanner, this is going to pop up on this version of Bash.
But Red Hat no longer considers this to be a security issue, because there are maybe other factors that mitigate it. So, as a development organization or as ops, you may not know the exact impact of a particular vulnerability. That's why we need collaboration between DevOps and security, to make sure that we are on the same page. But we are arming you with a lot of information to make the argument that if you do need to use a component, then you can use it and you can allow it to run in your environment. At the end of the day, we want our images to be secure. So we want to get an agreement that an image is okay to run. We can put those rules in the CI, and we have the information that allows us to collaborate with security and make that determination. But at the end of the day, the image is going to be uploaded to a registry and should then be available for use. And then we go into the second phase: we also want our Docker engines and our OpenShift nodes to only accept known images, images that have been assessed, images whose security posture is known, to approve them based on the risk, and also to maintain the integrity of the image, so that if somebody changes it or the name changes, we can detect that and make sure that the image is approved for use before we run it. So we have those three demo images. Let's run through this pretty quickly. We can pull our demo 1.0 image and try to run a container from it, and we get the message that we don't have permission to execute it, because this is an unauthorized image. So demo 1.0, as you remember, is disallowed, and the message was that we won't be able to run it. And that's how we won't be able to run it.
And that's because Aqua is safeguarding the Docker engine on that particular node and doesn't allow that image to instantiate. We can't just swap out the image name, right? Docker tag still works, even in a Kubernetes environment. As far as the Docker engine is concerned, we have the 1.0 version and the 2.0 version. But if you try to run that 2.0 version, it still tells us that this is an unauthorized image. And the reason we know that is that in the course of assessing the security of an image and getting all the vulnerability information, Aqua computes a hash of the file system and all the individual files, and then a complete digest of the image. And what we're doing is providing a state for each of those images. A state can be registered, unregistered, or invalid digest. In this case, our 1.0 image is registered, it's the same image, but it's blocked because it's disallowed. My 2.0 image is actually very iffy, right? Because the server digest is one thing, we've looked at it before, but locally it doesn't look like what we expect, and therefore we block it from running. The only way to really run this image is to pull it again. And that's the 2.0 image, and then we get our hello world, and then that image becomes registered. So it's not only the ability to understand the risk posture of an image in the pipeline; we also need to have our engines, our nodes, only accept images and deployments that are approved for use. And we have the ability to stop images that are not approved, or that are different from what we expect them to be. One of the things that Aqua provides as a compliance view is the same view across all the nodes. This is my host images view, and you can see that a few of them are not like the others, right?
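Conceptually, the registration check just described is a digest comparison: record a digest over the image's contents at scan time, and refuse to run anything whose local digest no longer matches. This is a simplified sketch of the idea, not Aqua's actual implementation:

```python
import hashlib

# Digests recorded when images were scanned and registered
# (layer contents here are illustrative placeholders).
registered = {}

def register(name, layers):
    """Record a digest over the image's layer contents at scan time."""
    registered[name] = hashlib.sha256(b"".join(layers)).hexdigest()

def check(name, layers):
    """Return the image state: registered, unregistered, or invalid digest."""
    if name not in registered:
        return "unregistered"
    local = hashlib.sha256(b"".join(layers)).hexdigest()
    return "registered" if local == registered[name] else "invalid digest"

# demo:2.0 is scanned and registered on the server side...
register("demo:2.0", [b"base-layer", b"app-layer"])

# ...so the unmodified image is fine, but a re-tagged or altered one is not.
print(check("demo:2.0", [b"base-layer", b"app-layer"]))  # registered
print(check("demo:2.0", [b"base-layer", b"tampered"]))   # invalid digest
print(check("demo:1.0", [b"base-layer"]))                # unregistered
```

Re-tagging a disallowed image to look like an approved one changes the name but not the content hash, which is why the runtime check above still catches it.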
Some of them are not registered; some of them are not registered and don't comply with policy. This is really not what we want to see. We want to see something more in line with the rest of my server environment. My number three server is probably the best one, because I didn't put anything unauthorized on it. This is what we want to see: everything registered and everything complying with the policy. The same view, by the way, can be had for containers. So for every running container, we can see whether or not it complies with the policy and whether it comes from a registered image. This is the phase where we're able to say that everything we deploy, everything we ship to a particular node, is something that, A, we expect, and B, has passed the organization's security policy, which should make everybody a lot more comfortable with what can run in the environment. In addition to that, the deployment can often be done with automation; in a Kubernetes environment, it really should be done with automation. But we want to separate automation from human actions: not allowing access to the node, controlling who can gain privilege if you do allow access to the node, limiting what permissions, volumes, and networks can be used in the course of a deployment, and basically giving you an audit trail. That's another thing that Aqua can do, because we have the ability to audit everything that happens in the Docker engine itself. This is the audit log from the last few minutes, and you can see that I had a block because my demo two was disallowed; everything else was successful. Another thing: if you are working in a Kubernetes environment, you expect the Kubernetes user context to do all the heavy lifting around changing and starting and stopping containers. So it's really easy to identify if there is a named user doing any of that work.
That, of course, can be funneled into a log aggregator or a security information management platform, and taken from there and correlated with the rest of the environment. So preventative controls are great, but the detective controls, the ability to see the logs, are also very useful, because they give us the ability to prove that our controls are working. And then we get to the running containers themselves. We've seen that the images are good enough to run, because we've done a security assessment on them. We know that they haven't changed. We know that they are run by authorized services or authorized people. But when a container is running, there are still a lot of things it can do, right? It's still running software, software that is often exposed to incoming traffic, and we want to make sure that our container really runs according to plan. So there are a few things we want to verify. We want to make sure that the user context we run a container with is the right user context, and that the executables are in line with what the container should do. We want to prevent any significant drift between the container and the image, because we really should not be patching or making changes to a container. And then we want to understand the usage data: not resource usage, but who is using what, and whether there is any misuse of the container, basically an attempt to co-opt it into doing something it's not supposed to do. Aqua provides those controls while a container is running. One of the first things we want to do is get a baseline of what a container does. If we take a look at our list of images here, I have WordPress as one of the images, and if I just start a WordPress instance, one of the things that happens is that we start to get telemetry from it.
So we are getting usage information from it. We are getting all the executables that were run, and you can see that WordPress is a pretty heavy application: it contains a lot of processing and a web server, and even so, there may be 20 or so processes that need to run. We can see what the networking requirements are, whether any environment variables were passed along to the container, and what user account it's running under. This is actually not a very good image, because it's running as root; we want to make sure that doesn't happen in production. But let's take a look at the resources right now. So now we have an idea of what WordPress really needs when it's running. Now let's take a look at the image itself. If we do, say, a docker exec into it, even as root, and go into our container, we can see that there are quite a lot of executables there. It's actually not as bad as some of the full operating system images, but it's more than the application needs. So one of the questions we need to ask ourselves is: do we want to run every single executable just because it's in the image? But another very important question is: do we want to allow those executables to change? So now I'm root inside of a container. There are namespaces here, so I'm not root on the host; I really can't do a lot of damage on the host, but I can do quite a bit of damage to the container. I can take ln, for instance, which is a pretty innocent command, and do a copy: copy something over another executable, or maybe download an executable, or do something that adds code to the container that was not found in the image, and then try to run it. Well, we can't run it, and the reason we can't is that one of the fundamental policies in Aqua is to prevent drift between the container and its image.
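The drift-prevention idea can be sketched in a few lines: record what each executable looked like in the image at scan time, and refuse to execute anything that differs, even if it was copied over an existing name. This is a simplified illustration of the concept, not Aqua's actual enforcement mechanism:

```python
import hashlib

def fingerprint(content):
    """Hash an executable's content."""
    return hashlib.sha256(content).hexdigest()

# Executables and their hashes as found in the image at scan time
# (contents are illustrative placeholders).
image_baseline = {
    "/bin/ln": fingerprint(b"ln-binary"),
    "/bin/ping": fingerprint(b"ping-binary"),
}

def may_execute(path, content):
    """Allow execution only if the path existed in the image
    and its content is unchanged, i.e. no drift from the image."""
    return image_baseline.get(path) == fingerprint(content)

print(may_execute("/bin/ln", b"ln-binary"))    # True: unchanged image binary
print(may_execute("/bin/ln", b"ping-binary"))  # False: ping copied over ln
print(may_execute("/tmp/evil", b"payload"))    # False: not in the image at all
```

This is why, in the demo, a binary downloaded into a running container, or even ping copied over ln, is blocked: neither matches what the image contained when it was assessed.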
So one of the ways to allow freer usage of containers is to limit what can be done to them once they're instantiated. Even if I change ln itself, say, copy ping over ln, then ln, which was the right command a moment ago, now can't be used. All of this is designed to make sure that once a container is instantiated, there is very little an attacker can do to exploit it, and that's called drift prevention.

This is actually what allows us to go back to our list of running containers and do some searching. Let's search for a vulnerability. We can search the whole environment for a particular vulnerability and get all the containers running images that contain it: an instant impact analysis. We can go into the containers and look for a particular package or executable, say, find all of those running Python. And this relies on the fact that we know each container is based on an image, we've already assessed that image, we know its security posture and its components, and we're preventing the addition of any executables, whether downloaded from the internet or written as a script inside the container. The idea is that in order to maintain that assurance, only known images can run, containers can only run from known images, and they cannot change from those images. That gives us end-to-end assurance that the images we are running are indeed the ones we expect, and it's what makes those search capabilities possible.

Once that's in place, we can go back to the list of executables and all the other resources we found in the container and turn it into a more complete profile. I'm switching from WordPress to JBoss now, because it's a somewhat shorter list.
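Drift prevention as described above can be sketched conceptually: hash the executables the image ships with, then compare the running container against that baseline and refuse anything new. The paths and file contents below are fabricated for the demo; a real implementation works on image layers, not a temp directory.

```shell
#!/bin/sh
# Simulate an "image" with one shipped executable.
mkdir -p /tmp/imgdemo/bin
printf '#!/bin/sh\necho app\n' > /tmp/imgdemo/bin/app
chmod +x /tmp/imgdemo/bin/app

# 1. Baseline taken from the image at scan time.
( cd /tmp/imgdemo && find bin -type f | sort | xargs sha256sum ) > /tmp/imgdemo/baseline

# 2. Something adds an executable at runtime -- the "drift".
cp /tmp/imgdemo/bin/app /tmp/imgdemo/bin/evil

# 3. Re-hash and compare: any difference means the container no longer
#    matches its image, so the new binary would be refused execution.
( cd /tmp/imgdemo && find bin -type f | sort | xargs sha256sum ) > /tmp/imgdemo/current
if ! diff -q /tmp/imgdemo/baseline /tmp/imgdemo/current >/dev/null; then
  echo "drift detected: blocking unapproved executables"
fi
```

Because the baseline is tied to the pre-assessed image, this is also what makes the "which containers contain vulnerability X" search instant: the runtime contents are guaranteed to match what was scanned.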
So this is a profile based on JBoss, and we can see that it really needs just seven executables to run. We can add a lot more to the profile: whether the container needs networking at all; read-only files, if there's a configuration file in the container that should not change; the executable list; and the user context in the container. WordPress ran as root; JBoss is actually a little better, it has a user ID specific to it. This parameter tells the system not to run executables that weren't in the original image. We're preventing running with privileges, which covers the case where a container is instantiated outside of the OpenShift controls. Then we can add things like a seccomp profile, which can be a little more granular than SELinux and can be applied per repository. We can drop capabilities, control the volumes for the container, and control resource limits, all in a way that complements what's specified in the deployment for that particular container. So we're getting to a situation where a particular application can be even more tightly limited in how it can run.

To show that, let's run our application, supplying some parameters, a name, a port, and so on, so it's a little more in line with what a deployment would look like. Then let's try the same thing we did with WordPress: a docker exec to run bash in it. I'm getting permission denied right off the bat, and that's because root is not an authorized user for this container. Root should be reserved for basically nothing; really, a container should not run as root at all. If we drop that, we fail over to the user defined in the image, which is the jboss user.
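Many of the restrictions in the profile just described have rough native analogues in the Docker CLI. The invocation below is illustrative only (it needs a Docker daemon and is not run here), and the image name and UID are placeholders to adjust for your own image:

```shell
# Each line maps to a profile restriction from the narration:
#   --user 1000                         fixed non-root user context
#   --read-only                         filesystem cannot be changed (anti-drift)
#   --security-opt no-new-privileges    block privilege escalation
#   --cap-drop ALL                      drop all Linux capabilities
#   --memory / --pids-limit             resource limits
docker run -d --name myapp \
  --user 1000 --read-only \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --memory 512m --pids-limit 100 \
  jboss/wildfly
```

The point of a profile layered on top of these, per repository, is that the restrictions travel with the image rather than having to be repeated correctly in every deployment.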
So now we're in the container, but you can see a bunch of permission-denied errors there, because anything else, ping, ls, ps, curl, really shouldn't run. Even though it's available inside the operating system, it's not in line with what this container is supposed to do; it's not on the list of authorized executables. In this case, the image is not well built. Let's say we just switch this to audit-only mode and save it. Now ls does work, and the bin directory is huge, there's a lot of stuff in there. Something I probably should have mentioned at the beginning is that size really does matter. Smaller images usually present a smaller attack surface. This one has a huge attack surface; anything in here can have a vulnerability and be exploited. So if you're using big images, and sometimes there's a reason to, limiting the processes that can run to only what the image requires greatly improves the security posture of the entire environment. So let's go back and reapply my policy here.

Now let's take a look at how we actually pass parameters to a container. Secrets management is something that's talked about a lot; at the last DockerCon I must have had 50 conversations on secrets management alone. And there are some organizational issues with using the built-in secrets management inside the orchestration tools. Not that they aren't good; they just don't play nicely with the rest of the environment. So as an alternative, Aqua uses its ability to interface with containers to provide secrets distribution: our secret distribution sits on the front end, as far as the nodes are concerned, and the back end can be any of a variety of integrations.
It could be HashiCorp Vault, it could be AWS Key Management Service or Azure Key Vault, or even our own internal database providing those secrets. And they really live outside the orchestration, because security sometimes wants to control them directly, right? Orchestration may be a little too much under ops' control. If we run our application, what we're seeing here is a user value, which really doesn't need to be encrypted; a token, which should be tokenized; and something that comes from a vault somewhere. So if we do a docker inspect on my application, and pipe it through grep so we don't get a big blob, you can see that the unencrypted value is still unencrypted, whatever was tokenized is still tokenized, and my token here is encrypted, we really can't see it anywhere outside the container.

So where is this secret coming from? In this case, from the Aqua internal database; it could equally come from HashiCorp Vault or one of the key management stores. And first of all, we get tracking of which container is using it. That's really important, because if the secret leaks out, you want to know the impact. And we can actually change the value, so let's put something in here, say "openshift", and save the change. That gets transmitted directly to the container. So if we go into the container with an exec command and look at the environment variables, we see the new value, decrypted and handed off to the container. So secrets management can be done externally to the orchestration, under the control of security rather than ops. And you can mix and match: if there are secrets that are very operational in nature and you still want to keep them in Kubernetes as part of the system, you can still use them.
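The injection pattern being described can be sketched in plain shell. Everything here is a stand-in: `vault_get` fakes a secret backend (HashiCorp Vault, a KMS, or Aqua's internal database in the demo), and the names are invented. The point is that the stored configuration only carries a reference, while the real value exists only in the running process's environment.

```shell
#!/bin/sh
# Stand-in for a secret backend lookup (hypothetical helper, not a real API).
vault_get() {
  case "$1" in
    db_password) echo "s3cr3t-value" ;;
  esac
}

SECRET_REF="vault:db_password"                     # what the config stores
DB_PASSWORD="$(vault_get "${SECRET_REF#vault:}")"  # resolved at injection time

# The child process (our "container") sees the decrypted value...
DB_PASSWORD="$DB_PASSWORD" sh -c 'echo "in container: DB_PASSWORD=$DB_PASSWORD"'
# ...while the stored configuration still only references the vault entry.
echo "in config: $SECRET_REF"
```

Rotation falls out of the same shape: change the value behind the reference in the backend, re-inject, and nothing in the deployment definition ever held the secret.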
If there are secrets that are more corporate, that need to go into a separate vault or be rotated in a more automated way, then you have the ability to do that with Aqua, where Aqua distributes those secrets to the environment.

And everything we've done so far is also audited. So we audited the fact that we changed the secrets, and that's important: security really needs to know whenever secrets are rotated. We've seen all the docker inspect and start commands. We did a lot of detection of file execution: here is where ls was permitted, and here is where file execution inside the containers was blocked, and you can see the difference between detect and block. We're also capturing the actual original user: this is me logged on to the host, even though I did all the work inside the container. That's something the compliance organization is usually interested in, figuring out who did what in the environment.

So we're getting a lot of information, and really at the core of it is the notion that we have the opportunity to do better security with containers. So if you have security people who are resistant, please tell them that containers are actually good for security, and somebody who's been doing security for 20 years is saying that. These are the capabilities, I'm not going to read them all out loud, but these are the things that are important for security, and these are the things we can execute in a containerized environment, which should give us the ability to run meaningful workloads: in-scope applications, applications subject to regulation, according to the security needs of the organization. So I hope this was beneficial. Please visit aquasec.com; there's a lot of information there, and you can read some of our white papers.
You can look at the OpenShift partner page for some of the specific things we do around OpenShift. But I'll open it up for questions, if there are any.

All right, well, thank you very much, Tsvi. I got the demo when we were at KubeCon, but we didn't go into this much depth, so I really appreciate it; it's great stuff you're doing. There have been a couple of questions, and I think the easiest thing is to unmute Peter, who's been asking most of them. Peter, you're unmuted now; why don't you go ahead and frame your question, and that should kick off some conversation.

Sure. Basically, I'm trying to figure out the scope of the scanning. A lot of the questions we get are about compliance, and that's the concern. It's not just the CVEs; it's also, is the server actually allowed to go into production? Is it following the government or industry security standards you have to apply? So when we do an OpenSCAP scan, for instance, we're mostly interested in the configuration of the components that are running, not just that they're running the right and latest binaries, but that they won't allow bad things. Examples could be allowing root logins over SSH, or permitting weak encryption algorithms, things that are not allowed once you get to even a minimal level of security. So I was wondering whether those kinds of configuration files, anything from JBoss to Apache and so on, are part of what you're evaluating.

Yeah. So we are evaluating the makeup of the image in addition to the files. I didn't really go into the policy, but one of the things you can actually do, and you touched on it, is identify images that are defaulting to run as root, where is that control, right here.
So that's the equivalent of asking whether root login over SSH is open on a server. The configuration requirements are going to be a little different, because containers really should not have direct access to them, but you're absolutely correct that both the vulnerabilities and the configuration need to follow the guidelines of your server environment. As a preview: over the course of the next couple of months we're going to release a version that can run code snippets and script snippets to assess the security of certain elements of the operating system, plus SCAP support, so if you know how to use that language, you can provide a structure that we can assess the image against. And we definitely recognize that vulnerabilities are not the be-all and end-all of the security posture. The actual configuration of the image, whether an SSH key was left in there, whether your web server is running under SSL, all of these are important questions, and we are adding them to the image-assessment part of the Aqua tool. Does that answer your question, Peter?

I think he's muted himself again. Yeah, I try to be nice and not make any noise. Yes, that answers my question.

Okay, perfect. And it gives me a perfect segue: we're going to have to have you back when you get that next release out, to show off those new capabilities as well, so keep us posted. We're almost at the end of the hour, and I don't see any other questions. I'm wondering if you have anything else you want to add, Tsvi, or if Upesh wants to jump in and ask anything?

I think most of the questions got answered in the Q&A, and they were questions that anticipated things you then explained in the slides and the demo, so I think we did a pretty good job of covering everything today. I had a quick question.
So did you say that you're looking into doing the scanning using SCAP and SCAP profiles, that that's on the roadmap? Is that what I understood from that last answer? Yeah, so the intention is to have a feature where you can supply either a small shell script that checks for something or an SCAP expression that checks for that thing.

All right, and thanks for asking. Anyone else have any questions? If not, you can always check these folks out at aquasec.com, or jump on Slack, a couple of them are in the OpenShift Commons Slack channel, or hit them up on the mailing list as well. So again, thanks to Tsvi and Upesh, and I think Rani was on there as well answering questions. Thanks very much, and we're really pleased to have you as members of the OpenShift Commons and look forward to hearing more from you in the future. Yeah, thanks very much, everybody.
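The "small shell script that checks for something" mentioned in the roadmap answer could look something like the sketch below. The file path and contents are fabricated for the demo; it shows the same class of configuration check (root SSH login permitted) that SCAP content expresses declaratively.

```shell
#!/bin/sh
# Fabricate an sshd config the way it might appear inside an image layer.
mkdir -p /tmp/scandemo
printf 'Port 22\nPermitRootLogin yes\n' > /tmp/scandemo/sshd_config

# The check itself: flag a configuration that permits root login over SSH.
if grep -qiE '^[[:space:]]*PermitRootLogin[[:space:]]+yes' /tmp/scandemo/sshd_config; then
  echo "FAIL: image permits root SSH login"
else
  echo "PASS"
fi
```

In the scanner described above, a snippet like this would run against the image's filesystem during assessment, alongside the CVE scan, and a FAIL would factor into the image's compliance verdict.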