Good morning everybody, and welcome to yet another OpenShift Commons. Thank you all for joining us again. We're back on our weekly track of one a week, and we're happy today to have Black Duck coming in and talking about vulnerability management, the hows and whys of it, how to deal with it in the context of OpenShift, and really just to give us some background on the whole topic as well. We have with us Tim Mackey, and he's going to give us a bit of an overview and tell us the Black Duck perspective on all of these things. There will be an opportunity to talk after he gives his presentation and do a Q&A. While he's talking, please ask your questions in the chat. There are a number of other folks on who may be able to answer them, and if there are some really great questions, I may repeat them and get them answered verbally for the recording. After this session is done, it'll be posted as per usual on the OpenShift Commons YouTube channel and on the OpenShift blog at blog.openshift.com. So without any further ado, Tim, why don't you introduce yourself and the topic and we'll get into it. Cool, thank you, Diane. So today what we're going to talk about is the how and why of container vulnerability management, and a little bit about myself and what I'm all about. My current role here at Black Duck is senior technical evangelist. I do still code occasionally; my current favorite language is actually Go. For the last 10 years I was part of the Citrix open source business office and was the community manager within that, so things related to virtualization, cloud, container management, and relatively large-scale systems have been part and parcel of my existence for that time. Some of the cool things that I've done in the past I've listed out on the screen. If you want to follow my various ramblings on Twitter, I'm @TimInTech.
The decks that I present, all the presentations that I do, are up on my SlideShare, and this deck will be up there probably later this afternoon. And if you want to follow me on LinkedIn, that link is also up on the slide right now. So before we get into any of the real details of vulnerability management, we have to understand the attacker model. Vulnerability management is fundamentally connected to data breach management. The Verizon data breach report from 2016 was quite telling in its fifty-something pages. 89% of the data breaches across their customer base had some form of financial or espionage motive behind them. This was very much a case of people trying to do a couple of distinct things. You've seen talk about ransomware, you've seen talk about credit card access, you've seen talk about activities that are all about what you can do with personally identifiable information, and that's what made up a significant portion of it. On the espionage side of things, that was state-sponsored actors out to do malice on people; it's also corporate entities trying to find out what the next big thing is from somebody else. When they distilled all the data behind that, one of the things they found was that the vast majority of the costs associated with data breach management fell on the legal side and the forensic side.
So the legal side really was a case of, gee whiz, I need to hire a bunch of lawyers to tell me how bad this is going to be for me from a reputational perspective, from a data management perspective, from a you-name-it perspective: get the lawyers involved and have them give some opinions. And the forensics: exactly how bad was this, how much of the data got accessed, which data, how did they get in, what were they doing. Those were the things that really did dominate the remediation expenses associated with data breach, and this is a simple distillation of that massive report. I encourage everyone to go and read it in detail; it's definitely well worth the read. Now, if I look at my engineering background, we've done a lot of work around software development life cycles and threat modeling, and we try to put ourselves in the position of whoever is going to actually perform the attack. But in reality it's the attackers who decide what's valuable. This little snippet here was from a little over a year ago, and it impacted the police department of the town next to the one that I live in. They got hit by one of those lovely ransomware people, and a request was made for roughly $500 in Bitcoin to unlock the information. Now, if I was an attacker and I managed to gain access to some quantity of police data, I've effectively got three possible scenarios available to me. Number one, I can decide that maybe it's not in my best interest to upset the law enforcement people out there, they might come after me, so back out, no harm, no foul, make like it never happened. Option number two is to just stay the course and go for whatever it was that I asked for; in this case, the ransomware locked down all of the evidence information for the various cases they were working on. And number three might be, well, I want more. In this case, the ransomware people weren't that sophisticated.
They decided to stay the course, they went for their $500, and the police department ultimately paid up and then went on a promotional campaign to say, look, if this can happen to us, this can happen to you, and these are some of the things you need to be aware of to minimize your potential for attack. They've consistently been doing that over the course of the last year or so. One of the other things the Verizon Data Breach Report saw, and SAP's own research supports, is that a very large percentage of all cyber attacks actually happen at the application layer. It's the deployed applications that are valuable, not necessarily all of the stuff in the middle. But the interesting thing is that from a pure investment and strategy perspective, we see much more energy put into network defenses, perimeter defenses, new IDSs and IPSs, new firewalls, more aggressive application firewalls, new techniques to block certain things. That's out of alignment with how those attacks are actually progressing. And it also carries an implicit assumption that the world of technology is data center-centric. What we have seen over the course of, say, the last 10 years is a movement from that which is web-based to SaaS, to mobile, through cloud, and now IoT. So these models don't necessarily track with what the attackers are thinking about. And of course no security talk would ever be complete without the requisite bad-guy-in-a-hoodie photo. Here he is: meet Bob. Bob is going to create an attack, and here's the methodology he actually goes through. He's going to theorize against a variety of potential scenarios that might gain him value from the thing he's attacking. He's going to test it against a variety of platforms. And if he's like any other software developer, chances are the first one's not going to work so well.
So he needs to iterate a couple of times, have a few new ideas, and eventually he's going to create something that performs the type of attack he wants to have happen. Now it's all well and good for Bob to have his own attack model, but he wants to be known by more than just his own buddies. So he's going to create a deployment package so that everyone can use it. And that deployment package has got to be documented well, so he's going to create a few YouTube videos that give full documentation on exactly how to go about using this. And the last piece: if he can get it just right, he'll come up with some model for promoting it, because the press very much love a catchy vulnerability name; remember Heartbleed, Shellshock and company. So this is essentially the model an attack goes through, and if this looks an awful lot like a software organization developing a new product, well, it kind of is. The point here is that the attackers themselves are significantly more sophisticated and significantly more creative than the ones we might have seen a few years ago. One of the things we need to really care about if we're operating data centers is understanding what the scope of compromise might be. So on the left-hand side of the slide we have our shiny happy people, and our shiny happy people are the users of our application. And like every good data center operator, we've got some firewalls in place to make certain that the shiny happy people only see the things they're supposed to see. Now those might be basic load balancers, those might have some content awareness, those might be doing a lot of caching and compression and all kinds of other fun things, but they're fundamentally a firewall protection device. And we have a stack of servers that we're going to deliver our applications from.
And I've highlighted one of those servers, and that server is going to be running a virtualized environment, so it's got some form of hypervisor. It's got some form of control entity in it, and inside that control entity is a virtual network switch that you can program up with whatever level of stateful rules makes sense for policy management, for routing, for monitoring. There's going to be some sort of security service in place in this environment, and this is going to be replicated across each of the hosts. Into this we want to deploy some containers. So we're going to do that within a container VM, and we're going to choose some minimal operating system like Red Hat Atomic. We're going to have our various containers deployed in their respective pods, and we're going to replicate this across our hypervisors, so we end up with N VMs. And the interesting thing is, if I'm attacking an application, I have figured out a way to attack that application inside this highly protected, firewalled-off, network-secured environment. So if I have effectively gained control of that one container, I'm on the opposite side of all of those defenses, and all of the arms race acceleration that's happening on the network firewall, all the investment happening at the perimeter, doesn't help if I've already got the Trojan inside the walls. That's why vulnerability management is so critical: the attacker is already inside the environment when the time comes to mount whatever attack they want to mount. So with that as a backdrop, let's look at how a vulnerability can be exploited. First and foremost, we need to look at who's responsible for code and security. If we're in a commercial environment and it's a closed source world, or it's a highly managed commercial software operation, there will be periodic security updates; in the world of Microsoft that's become known as Patch Tuesday.
And that environment has a dedicated support team with their own SLA. They've got their own dedicated security researchers who are focused on their products, the exploitability of their products, and the things that are built out of them. They have all of the notification infrastructure in place to make this work in a very seamless fashion. On the flip side, if we have open source components, it's community-based code analysis, and if you want to be truly aware of what's in your application, it's up to you to be doing a lot of your own monitoring, and ultimately you're responsible. So if I take a look at this MediaWiki announcement, this is what it looks like. An announcement was made in December for a new update. The update included fixes for security issues in various special pages that resulted in fatal errors, which isn't very specific. They also announced in that same post that this version marked the end of support for the 1.24 series, they highlighted an end of life, and they backported a bunch of fixes because they thought it was fair to fix them. Now, if I'm dependent upon MediaWiki for my environment, I have to be monitoring for this, and I have to translate all of the various notifications that come out of the engineering team to determine whether or not this is a fix I should be applying. "Various special pages": I would have to dig into it, and the onus really is on me to understand what's going on. If we flip to something a little more fundamental, not just a wiki service, let's take a look at the glibc vulnerability from earlier this year, CVE-2015-7547. The bug associated with this was reported in July of 2015, and this is what the bug report looked like. There were a couple of things that highlighted that there were buffer overflows and buffer mismanagement that we would really want to be aware of and take account of.
A little over seven months later, again on the list, a CVE was assigned: CVE-2015-7547. That assignment occurred on the 16th of February and included information that the bug was introduced in glibc 2.9. So from July of 2015 through to February of 2016 there was an increasing risk, because there was some awareness out there that a buffer overflow was present. Normal engineering conversations occurred on the list to determine the scope and the issues behind it, a conclusion was reached that this was in fact a security-related issue, and it was disclosed. The bug itself went all the way back to May of 2008. What's interesting is that that disclosure occurred on-list, and it wasn't until two days later that the bug and the CVE were disclosed through the National Vulnerability Database. So this is MITRE's representation. There are a couple of other ways of getting access to this information, but this is typically what people think of when they think of a vulnerability disclosure: there's a CVE that's been put out there and some guidance about what to do about it. If I read the overview in here, it talks an awful lot about DNS management, but it doesn't give me any prescriptive guidance as to what to do next. There was a two-day lag between the list disclosure and the NIST disclosure, so people in the know could have been crafting some form of attack. So if we look back at Bob, Bob's ears just picked up when he saw that, and he started doing what he does best. And of course, if you're running an environment, the fact that this is a disclosure, that's cool. Well, maybe not, but that's cool. Ultimately you just want to fix it. So I did a search for commercial software and that CVE, and the first thing that came up on the list was an update from VMware. In the VMware case, they were vulnerable to this.
They disclosed that an update existed for ESXi 5.5 on the 22nd of February, and a day later they posted an update for ESXi 6.0. So, normal engineering: we need to determine the scope of the impact of this thing, we need to figure out what the appropriate fix is for us, we need to test it, so a couple of days is perfectly rational. And then about six weeks later they released new versions of the vCenter Server Appliance, which is their management console, which was also impacted by this. So patches became available, you figure out where it is in your infrastructure, you fix it, and now you're back to normal, for one vulnerability. Let me look at the actual details behind this vulnerability and where its impact was. We within Black Duck have a variety of data feeds. In this case, as of about a month and a half ago, there were 197 reports of this, 87 of which were vendor specific. But if you look about a third of the way down the list, you can see this lovely generic exploit URL. That goes to a GitHub repository that contains code for how to exploit this specific vulnerability in the real world. And you can read through all of the various entries on this through our UI. The problem is, this is one vulnerability. And the trend in security disclosures and security vulnerability reports has not been to slow down. What we actually see through our own research and data feeds is that there is a large percentage of vulnerability information that doesn't quite make it to NIST, for whatever reason. Maybe in the opinion of a developer it doesn't quite qualify, but there's enough chatter about it that this probably is a vulnerability whose value will be realized at some point far down the road. It may be that it's in the process of becoming a CVE and has been disclosed within some of the teams and is partially publicly known. Things of that nature do shift this around, but the quantity isn't changing.
So if I flip this around and look at it from a container perspective: production use of containers is something we have heard about consistently throughout 2016. There's been no end of surveys, and this is the one from The New Stack showing that production use of containers and containerized workloads is the norm today. We can very much see that within Docker's surveys; ClusterHQ has done one too. But most recently, Datadog has put out their own survey and their own analysis showing that in the last year, amongst their roughly 10,000-customer-strong user base, there's been a 30% growth in the adoption of containers seen by their monitoring services. That's a huge growth, and even if the growth rate might be slowing down a little bit, 30% is something that's going to draw anybody's attention. So with all of that as the backdrop, I want to take a look at how to secure the container contents and the environment itself. Let's start with limiting that scope of compromise. First and foremost, we want to be in a position where we're enabling the Linux security modules. There is no meaningfully good reason why we wouldn't be running with SELinux today. We also want to think about things related to kernel security profiles, address space layout randomization, and access controls of every shape and form. We also want to take a look at exactly what kernel capabilities our containers need. Within Docker, there's a subset of about a dozen kernel capabilities that are enabled by default. You can reduce the set if you don't need them, and you can add additional capabilities if you do need them. But if you go down the path of adding capabilities, the one you desperately want to avoid at all costs is CAP_SYS_ADMIN, because for practical purposes that's "hey guys, I need root, and trust me, I'm good and I'm wonderful".
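As a rough sketch of what that capability tuning looks like at the Docker command line (the image and container names here are placeholders, not anything from the talk), dropping everything and adding back only what the workload needs might look like this:

```shell
# Hypothetical example: drop all default capabilities, then add back
# only NET_BIND_SERVICE so the process can bind ports below 1024
# without any other elevated privileges.
docker run -d --name web-frontend \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  registry.example.com/myapp/web

# The one to avoid: --cap-add=SYS_ADMIN is effectively
# "trust me, I need root" and defeats the point of confinement.
```

Starting from `--cap-drop=ALL` and whitelisting capabilities tends to be safer than starting from the defaults and trying to remember what to remove.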
And you probably don't really need root, and you've probably opened yourself up to a world of hurt if you turn that one on. You do want to use a minimal Linux host OS; Red Hat Enterprise Linux Atomic Host 7 is a perfect solution for that. And lastly, you want to make certain when you're setting up your environment that you are setting specific CPU shares and memory limits, so that in the event an environment does get compromised and it becomes the noisy neighbor and tries to consume all the resources on a host, you have contained its ability to do damage just by being noisy and starving the legitimate processes. So I mentioned using a minimal operating system, and the minimal operating system we're going to explore a little bit today is Atomic. Atomic comes from the upstream Project Atomic, and in its Red Hat Enterprise Linux Atomic Host version it's really an optimized RHEL 7 variant that was designed for use with Docker. SELinux is already enabled. It uses rpm-ostree to do its updates, so no yum, and the benefit of that is that you get upgrade and rollback capabilities for any updates that are involved. That's a huge plus. It also comes with Docker and Kubernetes preinstalled, so you get all the base framework that you want in an OpenShift environment. Applications are defined using Atomic App and Nulecule. Nulecule is fundamentally a model for creating multi-container applications, so that's going to satisfy the requirements of a full pod definition in a Kubernetes and OpenShift world. So there's support for Docker, Kubernetes, OpenShift and Mesos, and the OpenShift artifacts created through Atomic App and Nulecule can run either natively through oc new-app or via the Atomic provider, interfacing with the OpenShift APIs to create the pods and associated containers within them.
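The noisy-neighbor containment described above can be sketched with Docker's resource flags; again the image name is a placeholder and the specific limits are arbitrary examples, not recommendations from the talk:

```shell
# Hypothetical example: bound memory, CPU weight and process count so a
# compromised container can't starve the legitimate processes on the host.
docker run -d --name web-frontend \
  --memory=512m \
  --memory-swap=512m \
  --cpu-shares=512 \
  --pids-limit=100 \
  registry.example.com/myapp/web
```

Setting `--memory-swap` equal to `--memory` prevents the container from spilling into swap, and `--pids-limit` bounds fork-bomb style resource exhaustion.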
One of the key benefits is that it actually provides a security and compliance scan capability through the action verb "atomic scan". And it's that we want to look at when we're talking about who we trust. One of the biggest problems in the world of containers today is that anybody can create a container, anybody can post a container anywhere, and you don't necessarily know exactly what's inside it. You can peel it back and look at some of the elements, but do you really have trust, or do you just have implicit trust? So Red Hat curates its own registry, and there are validated containers there. You can also create some with Atomic App; those will also be present within the registry. You can create your own private registry, you can pull things down from Docker Hub or Docker Cloud, and you can consume containers that come from somebody else. One of the benefits of containerization is that you can create layers of applications within a single Docker image and then move them forward as your application grows and evolves. But the last question to ask is: if these are validated and curated entities, when was that validation performed, and what exactly was validated? And for that we have, within atomic, two scan capabilities. One is OpenSCAP, and OpenSCAP is a fantastic solution for looking at the compliance environment of the containers, the virtual infrastructure, what have you, that's supporting the application. For example, if you have a PCI DSS environment and you need to separate the personally identifiable information from that which is just generic information on the back end, you can set up a set of profiles within OpenSCAP that include everything from password complexity all the way down to logging capabilities. And one of the components that's part of that compliance environment is vendor-supplied vulnerability data. So Red Hat provides it.
It's also available from a handful of other sources, and it's directly integrated within atomic itself. To use it, simply run "atomic scan --scanner openscap" with your container ID. Earlier this year at Red Hat Summit, Black Duck and Red Hat announced an integration between the Black Duck Hub and Red Hat Atomic. One of the value propositions this brings is not just vendor-supplied vulnerability data but broad vulnerability data for most open source components that are out there, covering the vulnerability, license compliance, and operational risk associated with the use of those components as some entity within the artifact you're creating as part of your Docker image. As I mentioned, it's directly integrated with Red Hat Atomic, and it has some rich tooling integration for development teams: whether you're using Jenkins, TeamCity, what have you, there are dozens of integrations there to support the development side of things. And it's seamlessly installed using atomic itself, from the Black Duck software container. The only difference in the usage model is that you change your scanner from OpenSCAP to Black Duck. So I think it's probably beneficial to have a little bit of a demo at this point, to show how easy these tools are, what their limitations are, and what their benefits are. I'll make a happy donation to the demo gods in the hope that nothing goes wrong. So let's flip over to our demo environment. If I go and look over here, I want to see the images I have in my environment. (Can you make the font just a tiny bit bigger?) It's a simple "atomic images"; I could run the equivalent Docker command if I wanted to as well. But I'm going to look at two specific images. The first one I'm going to look at, and run OpenSCAP against, is this image here for Tomcat. If I run this Tomcat scan using OpenSCAP, we find that it is not supported for the scan.
And the reason it's not supported is that Tomcat itself is not in any of the vendor-supplied lists for CVE management. If instead I take the image associated with RHEL and run that one, scanning a RHEL 7 container, it takes a little bit longer and ultimately comes back with a list of a few CVEs that impact this particular RHEL container. They're posted and published by Red Hat, which is where you would expect to get the information on a RHEL 7 environment. Now, if I go back to that Tomcat image and run a Black Duck scan on it, what I will see is the scanner running, and I'm not going to bore you with waiting for the results, because it takes about 10 minutes to unwrap that entire Tomcat image and build the scan. In true cooking-show fashion, I'm going to go up here and pull up the scan of Tomcat that I did about an hour ago and show you what it looks like. The first thing we see is that we have security risk, license risk, and operational risk, and that there are a large quantity of things that make up this particular container and all of the dependencies in it. So I'm going to filter on just the new things. I'm going to pick on glibc, and I see that I have a high security risk and a medium operational risk. Let's hover over the operational risk and see what this is all about. We have version 2.19. This version was updated almost two years ago, and it's 15 versions behind the current version of that particular component. It's a very stable component, and there is a very large number of commits happening to it, so it is truly well maintained. So the operational risk on this is medium only because it's so far behind what the current version might be, and we see low and high for those types of reasons as well. If I click on the security risk, I see a breakdown of what's involved: I've got it in a number of locations, some patches exist, some are new.
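For reference, the scanner invocations in this demo would look roughly like the following; the image names are placeholders, and the "blackduck" scanner name is an assumption based on how the integration is described as installing into atomic:

```shell
# List the locally available images, as in the demo.
atomic images

# Compliance/CVE scan using vendor-supplied vulnerability data.
# This only works for images covered by a vendor CVE feed (e.g. RHEL),
# which is why the Tomcat image came back as unsupported.
atomic scan --scanner openscap registry.access.redhat.com/rhel7

# Same verb, different scanner: a Black Duck Hub scan, which also
# covers components (like Tomcat) outside the vendor feeds.
atomic scan --scanner blackduck docker.io/tomcat
```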
I'm going to highlight this vulnerability entry. VulnDB is one of the data sources we take in to determine the project activity that's out there, and in this case the vulnerability is a "multiple nan* function string handling stack-based buffer overflow". We see a matrix showing how exploitable it is and what the impact is, and there's also a CVE associated with it. The CVE has a slightly different description, which does happen from time to time, and usually you get a much better view through the Risk Based Security data as to exactly what is going on, because it gets translated a little bit. Now, if I go and check on the references, I have 33 references in here, 21 of which have a vendor-specific URL associated with them. But I want to highlight a mailing list post. It'll take you out to where this is being referenced: "preparing a fix for glibc 2.23". This is some of the information that's in that glibc post. I look at the next one, and this is actually worth a read. There's a debate as to whether or not some of these are just memory safety issues that should be considered vulnerabilities, or whether "we probably don't have a shared understanding of what glibc bugs should be considered vulnerabilities and what should be considered ordinary bugs". So you see this type of discussion happening on the list. And if my objective is to run a site, this type of information, while beneficial to developers, specifically the developers of the component, is very difficult for the lay consumer to understand. That's one of the reasons why providing a very clear understanding as to whether or not a vulnerability exists, based on the source code, is so critical. So I'm going to go back to the slides. Do we have any questions? We have one; let me read this one from Carl: does the container being scanned have to be based on Atomic, or just the host you're using to run Docker?
So that's an excellent question. It can be based either on Docker itself or on Atomic. Or, if you have access to the raw sources, we can just point directly at that project, consume it, and provide a report on it. So if you've got upstream consumption, you can get it through either atomic or Docker, but if you're building it yourself, you can see where that code is coming in as well. Excellent question. So ultimately, the goal of running a secure data center is to minimize the risk associated with the things that are running in there. I mentioned open source license compliance as one entity: if you're redistributing, or have a requirement to redistribute, have you created a scenario where you have mutually exclusive licenses, and are those dependencies understood? If you're looking at open source component inclusion, is there a vulnerable aspect to some component you're consuming? This becomes incredibly important when you're dealing with forks: being able to track back to whatever the root is, identify that a vulnerability has been disclosed against that root element while you're off on a fork someplace, and recognize that you probably want to be paying attention to it. Understanding how that component is linked in is also part of the picture. On the operational risk side of things, as I highlighted, knowing how many versions behind you are, knowing what the activity level is, and knowing when the most recent release was is very important, because one of the biggest challenges is being able to distinguish between that which is stable and that which is dead. And "dead" might mean the project is still nominally maintained, but there's suddenly an increase in reports of issues in an area of code where the maintainers haven't been intimately involved for a while. API versioning is a huge issue.
Identifying whether change sets and change set churn are going to be part of the mix of how your application is built is also critical. So this has all been technology-focused. I've mentioned Black Duck a few times, and it's been the Black Duck logo down at the bottom, but we haven't really talked about who we are. We're a 14-year-old company that's in 24 countries. We've got probably close to 300 employees now, and we're consistently growing. It's a fantastic company to work for, it really is. The core value that we have is in our knowledge base. All of the stats you see on the right are probably out of date by the time we write them, but when you talk about over a million projects and over 300 billion lines of code under analysis, that's the type of thing we bring to bear in terms of analysis of the source code you've got and being able to identify the vulnerabilities in it. This is the heart of Black Duck Hub security, which starts out with a hub scan. That hub scan can come from an atomic scan, it can come through Docker, or it can come through any of our scan agents, which can be integrated into other tooling for SDLC purposes. It creates a set of fingerprints for the source code that's there and sends them up to the hub application. All of that is on-premises for you; all we do is transmit those signatures up to our knowledge base. That's because the knowledge base is so huge: you wouldn't want to be in a position where you were having to continually update it as new vulnerabilities came out. You'd be latent by definition, so having us manage that is a huge value proposition. But we're part of the solution, and only one part. Knowing what you have running, that's huge. Knowing why you have it running is critical. You need to have some form of proactive vulnerability response process.
The attackers out there are getting increasingly sophisticated. You need to know what's going to happen once you've done the deployment. We can create a container that is absolutely pristine in terms of its contents, and still have a reasonable level of confidence that at some point in its life cycle somebody is going to disclose something about it that makes it slightly vulnerable, puts a little bit of tarnish on it. Understanding what your response process is, and what you are going to do in the event of that happening, is the critical step. You want to invest in defense-in-depth models. You don't want to be in a position where you're putting all of your energy into the perimeter, hoping that the service provider is going to have all of the answers for you. Relying on the underlying infrastructure to provide your security is an important part, but you should have ownership of what is deployed and how you're going to work with it. You want to make the developers and the ops teams part of the solution. You want to make certain that they're making the right decisions around which components they're selecting, which versions they're selecting, and how they're going to deal with version management. And focus some attention on vulnerability remediation, because vulnerabilities exist and will continue to exist for the foreseeable future. Ultimately, it's up to all of us together to build a more secure data center.

As part of that, I'll leave this up as well. We've got a couple of free tools. We've got a Docker container security scanner: if you want it at the Docker level, by all means take that URL, click on it, point it at the container that you like, and get a scan result back in a few minutes. We also have a 14-day trial of the Hub. You get all the power of the Hub, including the integration with Red Hat Atomic. So I'll leave that up on the screen. Have we got any other questions?
I'll put my headphone back in so I can actually hear Diane. Yeah, that's okay. It's a pretty awesome presentation, and thank you. And I do have my hoodie on today, so I'm feeling like I should ask some really good attacker questions. This was great for me, because basically I create containers and I put them out there for people to use, and then I forget about them or leave them there, whatever, and they get used by lots of people. But this seems to be a preventative thing. So once a container is deployed, is there a way that we can use this Black Duck approach to scan something? Because containers get updated while they're in use as well. So if I'm hearing you right, you're scanning before deployment, but after a container is deployed, things may get updated in there; it may have other things added to the container, especially if we take a layered approach to it. So where in the life cycle of the application deployment does the Black Duck tooling and the scanning take place? It sounds like it's pre-deployment.

So it definitely will be pre-deployment in the majority of cases, but that doesn't preclude doing it post-deployment. Consider when you're coming up with the modification that you want to put into the container: that had to go through some aspect of testing and QA. So if you have a base container, a base image, and then you subsequently add a couple of extra packages or components in order to build out whatever it is that you want to deliver, as those pieces are versioned you can run the scan on the development side. But the scan that I did was on a container image that was present. So if you can pause that image, stop that image, you can run the scan at that point in time.

Okay, that's great. So I'm definitely going to have to give this a try, because I've only used the base OpenSCAP one. So this is going to be an interesting thing.
I'm looking to see if there's anybody else out there among the participants who has a question. You can type them in the chat or raise your hand and I can unmute you. We'll give it a few more minutes. And for those of you who might be watching this as a recording, if you want to ping me on Twitter, I'm very happy to have a discussion and answer whatever that burning issue is that needs to move you forward or make you happy with this technology being something you want to work with.

Yeah, this is great. I mean, you definitely put the fear of God in me in terms of trying to keep up with all of the updates, and we all know that it's almost humanly impossible to do. So kudos to you for this great offering and for the good update on how to manage our vulnerabilities. If there are no more questions, I will let us all get back to our day. Thank you very much, Tim and the whole Black Duck team, for making this happen today, and we will talk to you all again next week. You'll be able to find this and the slides on Tim's SlideShare, and we'll post all of this on the OpenShift blog, probably by Monday. So thanks again, Tim, and to the entire Black Duck team. Thank you again, Diane. Thanks for watching. Take care, guys. Bye.