Yes, please welcome our next speaker, Tim Mackey from Black Duck Software.

Hello, everyone. Can you hear me OK? Awesome. So today we're going to talk about something that isn't really covered all that much in the DevOps space. You hear about DevSecOps and that sort of thing, and this talk is about bridging the gap between the mindset of an operations engineer or site reliability engineer and that of a developer. There's a lot of content in here, and I'm going to try to make certain it all fits. If we're really lucky on time, I have a demo at the end; if not, please see me later.

A little bit about myself. My name is Tim Mackey. You can tell by my hair that I've been around a little bit. My most recent job before joining Black Duck Software was as the XenServer community manager within the Citrix open source business office. I've been with Black Duck for about six months now. I do write code occasionally; I used to say I wrote it in anger, to prove a point, and I also write it for cool projects, and some of that code is what I'm going to talk about today. My current preferred language is the new hotness, Go. Java is so far behind. If you want to hear about the cool things I've done, I'm very much willing to talk about them over beers. You can find me on Twitter, almost every deck I present goes up on my SlideShare (this one will probably be there tomorrow), and I welcome LinkedIn connections and so forth.

So everyone, please put your hands up. I believe in interactivity. Now, put your hand down if you've never built a Docker container. OK, cool. Put your hand down unless you've created a base image yourself, from scratch, not one you brought in from someplace else. Keep it up only if you've installed only software that you personally built or whose author you personally know; only if you've run static analysis on that software, fuzzed it, and pen tested it every single way; and only if you've never deployed a long-running or production container. So guess what, folks? On all of these counts, you're vulnerable, and you probably didn't know it. That's the basis behind this talk: how to identify what you've got.

No security talk on the planet would be complete without the person in the hoodie. I probably should change that to Mr. Robot at some point, but if anyone delivers a security talk and they don't have a person in the hoodie, they have no credentials.

There is a hosting provider and telco in the United States called Verizon. Verizon is a very large organization, and much of its hosting business runs out of an organization called Terremark. Every year they put out their Data Breach Investigations Report, covering the types of activities that resulted in a breach of data in Verizon services, Verizon-managed data centers, or customers in those data centers. The 2016 report (the 2017 edition probably won't come out for another six to eight weeks) found that 89% of those breaches had some form of financial or espionage component. People are trying to find ways to get at data they can sell, manipulate, or use. The other interesting finding is that the dominant cost of a breach is not figuring out how to fix it.
It's the attorneys trying to figure out the overall risk, and then the forensics analysis. Fixing the code, changing a firewall rule, things like that are incidental. Paying whatever regulatory fines, those are incidental charges too. The third thing is that vulnerable software is going to be out there for a very, very long time. There are vulnerabilities that have been known to the world for years (I'll have a slide on this in a little bit) that are still prevalent today. So what can we do as an organization to make this a little bit better?

Number one: I'm certain that at least half of you have gone through some form of threat modeling exercise, trying to identify the types of activities an attacker might use to compromise the system you're building. The problem with every single one of those activities is that it's theoretical. The attackers decide what they care about. Case in point: a year and a half ago, the police department in the town next to me was hit by ransomware. If you're running a ransomware attack and you find out you've hit a police department and encrypted all of their evidence records, you have a choice. One, you can back away and say: I don't want to mess with the police, I don't want them knowing anything about me, sorry, my bad, and make like it never happened. Two, you can stay the course and ask for your ransom, which in this case was US$300 in Bitcoin. Or three, you can recognize the value of what's there and ask for more. In this case, the ransomware operators either didn't realize what they had or chose to stay the course; the police department paid its $300 in Bitcoin and then went all over the media saying: look, if this can happen to us, it can happen to you, so pay attention.

When we look at overall systems, one of the truisms is that attacks are mounted against the application layer. Who here has heard of an attack on a firewall? On an IDS? On an IPS? Those are rare, and when they happen it's a denial of service, and a denial of service against the perimeter is a last-ditch effort. The majority of attacks are against the applications themselves, yet the majority of investment is in perimeter defenses. That's very good for the likes of Cisco; not so good for the Red Hats, the SAPs, the Oracles of the world. Investing in application security is really where our focus should be.

So let's take a look at a potential attack. Again, our friend in the hoodie; I'm going to name him Mike. Mike's job in life is to define, design, implement, and release an attack. Mike has a theory: if I send this carefully crafted whatever, or try this vector, I'm going to have an attack that succeeds against a particular platform. So he tests it. Like most software, it probably doesn't work the first time, so he iterates. Eventually he finds success, or he finds a new, unimagined, unplanned-for attack vector. Once he has that success, he needs to create a deployment kit for it, for example by putting it in Metasploit. He needs to document it, and YouTube is an absolutely fantastic way of documenting an exploit. And of course he wants to make certain it gets the best attention possible, so don't forget the PR department.
Fox News is very happy to put any kind of large-scale message up on the news, and Sky News and the rest of the world will pick it up, because sensationalism is part of the media today. If this model looks rather similar to an SDLC, to whatever you're building today, that's because it is. For Mike, this is his business. This is what he does and how he gets paid, whoever his employer is. Now, if he's a responsible researcher, all of this is disclosed and there's an opportunity to fix the defect and close the hole. If he's not, you end up with this model.

With that as the backdrop, it's important to understand a little bit about the vulnerability lifecycle. It starts with who's actually responsible for the code. In the closed source world, Microsoft, or more precisely Microsoft of a couple of years ago, has an SLA, a security response team, and regular patch updates. The phrase "Patch Tuesday" has become so synonymous with updates from Microsoft that it's an industry-standard term. If you depend on a piece of Microsoft software, you know where to go to get updates for it. The flip side is open source: community driven, no SLAs, broad interests.

So let's take a look at a sample update. I brought this in from MediaWiki; in a previous life I was a MediaWiki admin, and this is the type of thing we'd see coming out. It's a maintenance release targeting multiple versions, with notes like "various special pages resulted in fatal errors." That's helpful. "Please note that 1.24.6 marks the end of support for the 1.24.x series of releases. Technically, this ended a few weeks ago with the release of 1.26.0." So an end-of-life statement is dropped into the middle of a release note like this: "However, we thought it was prudent to fix the bugs anyway because we're good guys." Open source is kind of like that. You have to know where to look to find what you need in order to produce an actionable result.

That's MediaWiki. Now let's decompose a vulnerability. In February of last year, CVE-2015-7547 was published. For those of you who don't know, CVE stands for Common Vulnerabilities and Exposures. An organization in the US called MITRE maintains the overall list, with a handful of people working on it; Red Hat is one of the data sources into it for Red Hat curated products. This vulnerability is not GHOST, though it had the branding of being GHOST 2.0; GHOST was a year prior. This was a bug reported against glibc, and the bug report read exactly like this: "This change causes the pointer variable used in the function to use the wrong size if a new buffer is created after the address has been changed." In other words, a null pointer dereference or a buffer overflow, whatever you want to call it: standard C badness. "The program will crash if the calculated size of the buffer is 0." That bug report came in in July 2015. The glibc team took it, triaged it, and ultimately came to the conclusion that this was in fact a security issue. They assigned the CVE on February 16th of 2016. So we had the bug report in July, and the actual disclosure on list on February 16th. If I'm Mike and I'm trying to find a way to exploit something in glibc, I'm going to be watching that list.
I'm going to see words in there that look really problematic from an overall system stability perspective, and I've got a whole, what is that, six months to develop something against it and potentially target it. That particular item also said the bug was introduced in glibc 2.9, which was from May of 2008, so a particularly astute researcher could have had years to take advantage of this. Up to this point, though, the risk is relatively contained.

When a vulnerability is disclosed, there are a number of ways it makes it out into the world, one of which is the National Vulnerability Database, or NVD. In this case it came through the cybersecurity awareness system, which is your highest level of concern, on the 18th of February. So there was a two-day lag between the message on list and this vulnerability being disclosed. And you can see their summary, which doesn't necessarily have any correlation back to what the defect was; for practical purposes it's rarely actionable unless you truly understand exactly what's in there. So now we have a two-day window of additional, increasing risk for no other reason than that some people know about this and others don't. The end objective for everyone on the ops side is simply to fix the problem, because once somebody knows about a vulnerability, they need a fix.

So I took the CVE, put it into Google, and up came VMware. VMware had three dates associated with it, and I don't have any highlights here. On the 22nd of February they patched some versions of their ESXi hypervisor. A day later they patched a few others; commercial companies and commercial software have to run through test suites, which is pretty normal. It took another six weeks for them to backport it to a few other impacted systems that were more legacy. Again, it takes time to do all these things; not terribly untoward, but it takes longer to get there. So patches become available, you find the vulnerability in your environment, you fix it. That's one vulnerability, and they come out at somewhere around 10 to 15 per day on average.

Let's narrow this down to something even more targeted. If you've ever heard of an embargoed security announcement, that's when a responsible researcher contacts an organization like, say, Red Hat and says: gee whiz, I found this problem, here's how I believe you can reproduce it, and here's the badness I see resulting from it; you probably want to take a look and figure out how to put together a fix. It goes to the security response team, which works with the researcher and usually comes up with a reasonable, rational timeline for when the fix will be put out there. But because it's so severe, they're not going to pre-announce it. There won't be a set of public list entries, and everybody gets the result on the same day.

This is what one of those looks like. This is Dirty COW. Dirty COW is a branded vulnerability (there's a lot of that silliness out there), but it's also known as CVE-2016-5195. This is Linus Torvalds's commit message: this is an ancient bug, and a fix was actually attempted 11 years ago, but that fix was undone because of problems with the commit itself.
In the meantime, the VM, the virtual memory subsystem, had become more scalable, and what was a purely theoretical race condition back then had become easier to trigger. In other words, decisions made 11 years ago about what to defer (post-ship, triage bump, whatever term you use for deciding that a particular scenario isn't going to happen in real life) have a way of catching up to you. It's kind of Moore's Law working against us. Interestingly, if you look in the upper corner, you'll see that Linus was working on this on the 13th of October and posted the commit on the 18th of October. So we have an October 18th git commit ID for this vulnerability.

Now, Dirty COW, if you're not familiar with it, is a copy-on-write race condition, and all the patches were issued at the same time. Red Hat, Canonical, everyone involved in that patch stream had their patches available at the same time. Media coverage began at the same time. A lovely logo was created, and in fact this one had so much silliness associated with it that there is a Dirty COW store where you can buy Dirty COW merchandise at exorbitant rates, hopefully ensuring that nobody actually buys it; it's US$2,000 for a Dirty COW branded coffee mug. But this type of information makes it out there, and if you look at the bottom, the date on that is the 21st of October. Not exactly a coincidence. So there's a three-day lag where somebody might have been able to figure something out if they were watching Linus's commit IDs, but there was nothing to necessarily tie it back.

Now, in the US we have this lovely holiday called Halloween. I don't know if that exists over here in the Czech Republic or not, but it's a fall festival full of people dressing up in silly costumes, going around, having lots of candy, and getting very drunk. Did I mention getting very drunk and dressing in silly costumes? At Black Duck we have a culture where we like to have fun, and one of the things we do is hold a company costume party. So here we have it: the upstream patch embargo expires, Halloween arrives, and one of our sales people in fact dressed up as Dirty COW. That was her costume. I'm still not entirely certain who's photobombing her; we didn't realize there was a photobomb happening until we blew this up for the presentation last week.

Now, recall from the last timeline that there's a National Vulnerability Database. We have the 21st of October, we have Halloween on the 31st of October, and the NVD didn't pick this up until the 10th of November. So there's this huge window from the 21st of October to the 10th of November where the only way you know this thing is out there, patches and all, is if you either have a vendor feed (for example, you're getting it from Red Hat, which is cool) or you have the media. In our industry we know there are a large number of vulnerability scanners out there, and the majority of them feed off the NVD, so they would have had this massive, massive gap. That's an embargoed vulnerability. People usually ask what our data feed looked like on that one. We actually saw it on the 18th of October as well: we watch Linus's commit stream, and we were able to map it up correctly.
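To make that kind of lag concrete, here's a minimal Go sketch that asks the NVD when it published a given CVE, so you can compare the answer against the upstream patch date. It assumes NVD's current 2.0 JSON REST API, a newer interface than existed at the time of Dirty COW, and the response struct is trimmed to the two fields we need:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Minimal slice of NVD's CVE API (v2.0) response; all other fields omitted.
type nvdResponse struct {
	Vulnerabilities []struct {
		CVE struct {
			ID        string `json:"id"`
			Published string `json:"published"`
		} `json:"cve"`
	} `json:"vulnerabilities"`
}

func main() {
	// Dirty COW: patches shipped on 21 October 2016; when did NVD publish?
	resp, err := http.Get("https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2016-5195")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var data nvdResponse
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		log.Fatal(err)
	}
	for _, v := range data.Vulnerabilities {
		fmt.Printf("%s published to NVD: %s\n", v.CVE.ID, v.CVE.Published)
	}
}
```

For CVE-2016-5195, that published timestamp lands in November, weeks after the patches were available; that's exactly the gap a vendor feed or commit-stream monitoring closes.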
One of the other things that happened in that time window was a set of assertions about the nature of the vulnerability itself. For example, on the 21st of October an assertion was made that containers were completely immune to this vulnerability: it was not possible to trigger Dirty COW from within a container, therefore everyone using a containerized environment was safe. It was also asserted that anyone using KVM was safe. About three days later, it was proven that KVM was not safe. Two days after that, it was proven that you could, in fact, trigger a Dirty COW scenario from within any Docker container. And it was, I think, another five days before it was determined that Android itself was completely vulnerable, with a subsequent CVE against Android. The core message is that these vulnerabilities are not static entities when they're released. Mike is out there figuring things out, and Mike's colleagues are out there figuring things out, and you need to keep on top of that, which is really challenging. That's one of the things I'm imploring you to do: keep on top of these things as you make your decisions, and ask the hard questions. Why do I have this here? What does my configuration look like? Because ultimately, what our configuration looks like is what helps us avoid regrettable decisions.

So how many of you have heard about the 620 gigabit per second attack last fall? How many of you have heard about a series of hacked IoT nanny cams and DVRs? Well, those devices were what was used to mount that 620 gigabit attack. You're nodding your head, but what was the attack vector? How did they actually get into those nanny cams? It was a vulnerability in OpenSSH that had been known since 2004, one that allowed an OpenSSH environment to act as a proxy. If you look at the AllowTcpForwarding option in OpenSSH, one of the things the documentation says is that setting this value to false does not improve security, because an attacker would have to do other things anyway. That was true in 2004; you did, in fact, have to do other things. It's no longer true today. The world evolved, and now the default configuration of OpenSSH allows this potential to exist. So when you're making decisions around product configurations, you have to make them based on today's capabilities, not on what the timeline looked like or what the paradigm was when the configuration option was introduced. Far too often, and I can say this about every single line of code I've written, I put down a set of assumptions, whether in a design doc someplace or in a comment block. At the end of the day, that's a static point in time; the world keeps evolving, and my assumptions are usually wrong eventually even though they were correct when I wrote them. I'm certain that's true for the majority of you as well.

That's scenario one. Scenario number two: MongoDB has gotten a lot of press in the last week or so for some really bad choices in terms of its default configuration. In and of itself, it's actually a secure database service; there's nothing fundamentally wrong with it. However, the people choosing to deploy it can deploy it in a manner that is not ideal, such as, for example, putting it naked on the internet. Probably not the best thing to do for your database; a locked-down deployment looks something like the sketch below.
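As a purely illustrative sketch (this isn't from the talk, and a real deployment needs more than this), a minimal mongod.conf that avoids the naked-on-the-internet failure mode binds to an internal interface and requires authentication:

```yaml
# mongod.conf: minimal hardening sketch, not a complete configuration
net:
  bindIp: 127.0.0.1       # loopback or an internal address; never 0.0.0.0 on a public host
  port: 27017
security:
  authorization: enabled  # every client must authenticate and be authorized
```

Both options are standard MongoDB configuration; the point of the story is that nothing forced them on you by default at the time.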
Yet 28,000 instances of exactly that were discovered. And in fact, that happened over roughly a week after a new scan capability was added to Shodan, which is a hacker-friendly search engine. It went from being able to find only hundreds of MongoDB instances to 28,000, and the number has gone up since; that was on the 9th of January of this year.

The third scenario: remember how I said there are vulnerabilities that have been known for years and people still have vulnerable stuff out there? Heartbleed is two and a half years old at this point. There is no fundamental reason on the planet for someone to be running a version of OpenSSL that old on a public-facing service. Yet Shodan has, call it 200,000 crawled websites that are vulnerable to Heartbleed as of the 22nd of January 2017. They might be test servers; they might be pilots of some form or other. It does not matter. One thing I can say from my career is that the thing that was put in as a pilot has an annoying habit of becoming production without you realizing it, and when that happens, you're stuck with all those bad decisions. Try to avoid the bad decisions from the outset.

So that's a lot of the scare tactics; security people like their scare tactics, and I hope I put a little fun in there. Let's shift gears a little and ask what the implications are for an agile world. Start with a standard scrum process: we're in sprint zero and we're looking at the backlog. I'm not going to walk through a full methodology here; what I'll highlight are the things we need to think about. I've got the ops side, I've got the dev side, and I want to automate the majority of this. Strategy, compliance and policy, standards and requirements: those are manual things. A human has to review them. Have I adhered to whatever governance regulations are necessary to deploy this service, so I don't end up with the stupid Heartbleed thing kicking around? What security features am I putting in here? What do the ports look like? What protocols am I using? How am I encrypting? What are my cipher suites? All of these things need to be decided. What are my attack models? Reviewing all the stories, and pen testing, will be manual once we get through the sprint and have a release artifact.

We can, however, automate some of this, as the pipeline sketch below illustrates. We can automate the build environment so that we're not building anything different from what should be released, with all of our dependencies understood. We can automate some of our code review. We can automate our security testing: static analysis as part of the build, some dynamic analysis as part of the build, and triggering pen testing off of it, which makes pen testing semi-manual. We can also handle some of the security features by putting templates in place that make the right decision from the outset, so that people who don't necessarily know why something should be locked down are removed from the decision-making process; the template already takes care of it. Those are some of the things that can be automated. And I still have one gray block at the end, and that's vulnerability management.
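Here's a minimal sketch of what that automation can look like as a pipeline, written as GitLab-CI-style YAML purely for illustration; the stage names, the make targets, and the scan-dependencies command are hypothetical stand-ins for whatever build system, static analyzers, and vulnerability scanner you actually run:

```yaml
stages: [build, security, package]

build:
  stage: build
  script:
    - make build            # reproducible build: nothing ships that wasn't built here

static-analysis:
  stage: security
  script:
    - make sast             # hypothetical target wrapping your static analysis tools

dependency-scan:
  stage: security
  script:
    - scan-dependencies .   # placeholder for your open source vulnerability scanner

package:
  stage: package
  script:
    - make package          # only reached if every security stage passed
```

The shape matters more than the tool: the packaging action sits behind the security stages, so a failed scan stops the artifact from ever existing.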
Vulnerability management is what we really excel at at Black Duck, and that's the lens I'm going to use from here on. One of the things we have observed over the 12 years Black Duck has been around is that there is a legitimate maturity model for how open source components get selected and deployed in an environment.

Blissful ignorance is: hey, I'm building this new project, it's kind of cool, it's mine, I'm going to toss it up on GitHub. I've got all these dependencies in there, but it's solving a problem for me. I don't actually expect anyone to fork it, and I don't expect it to get baked into anything else. There are absolutely no policies in place when that happens, because I'm solving my problem and I don't really care about the rest. Version management and dependency management aren't really there.

As an organization matures (so, I got hired, cool, I have a job and a paycheck, this is no longer just my personal thing), there's a bit of an awakening. In the awakening, I define some manual processes, which may boil down to: I'm going to periodically run a review, periodically check for updates, and I've got a handful of components I really care about while not worrying about the rest. This is where a lot of the world is with containers today. You yank something down from Docker Hub and you assume that, whatever that image is based on, somebody is doing the work for you and the container is good. It's the moral equivalent of grabbing some version of OpenSSL and just running with it.

The third level is understanding. Somebody read something, and the review process gets a little more automated. I create a spreadsheet of stuff, I find some free tools to do the right thing, and I actually do some security scanning. We're starting to see that in the world of containers today.

What we need to get to is the state of enlightenment. In the state of enlightenment, we have a fully automated solution monitoring for open source risk: vulnerabilities, versions, licenses, developer energy, whatever defines your risk metric. We need to be able to get to that point.

So if we're going to design that solution, it needs to work within everything else. It needs to be complementary to static analysis and to dynamic analysis. Static and dynamic analysis are really good at identifying issues in the stuff you made, the code you wrote; you're not going to run them on upstream components, you're going to focus them on your own code. Vulnerability analysis worries about the components you depend upon, as they release updates and as information about them is released.

Now, in 2015, a little over 3,000 vulnerabilities in open source components were disclosed. Last year that was up by a thousand. One of the reasons we're going to see a lot more is that there is a very keen focus in the industry on the security of open source software. It's been recognized that vendors have a tendency to invest in these things, or maybe keep them under wraps, while open source is very transparent and open; but smaller or upstream projects don't necessarily have the level of committed energy to focus on the things that you care about.
There's a project called the Distributed Weakness Filing project that looks at exactly this: if a vulnerability in an open source component is reported there, they'll figure out who the right people are to talk to. It effectively puts a security response mechanism behind upstream entities, which is an awesome, awesome thing.

So I mentioned designing a better solution. Number one, if I'm going to look at vulnerability information, I want to make certain I have a broad knowledge base to draw from. At Black Duck, we have over two million projects that we actively monitor: a whole bunch of code, license types, the works. We're watching 8,800 websites (it's probably closer to 10,000 now) for the types of activity indicative of risk, and we're pulling that all in. Those manual processes, back when you were bordering on enlightenment, might cover half a dozen sources, a dozen, maybe a hundred; we pull from everywhere.

Then we want to find the things that matter. GPL license compliance is a huge thing if you're building an Apache-licensed project; the obligations conflict. There are a lot of different license types, some of which look like they should be compatible but aren't. Do you have a release requirement that is now challenged by some component you brought in? I've talked about vulnerabilities. And there's operational risk: how do you differentiate between a component that is truly stable, where there are simply no changes, and one where the developers have decided to go work on something else and the project is actually dead? Both look stable; one has more risk than the other. Are you going to see a large change set in your future, some API versioning or things of that nature? Those are huge, huge deals. Does the project have a security response process? Those are the kinds of information you want in there.

Next: if I'm going to put this into an SDLC, I want to be able to gate my environment. If something gets in because of a choice a developer made, or a choice made six months ago that was perfectly legitimate then but has changed since, we don't produce any artifacts; we automatically detect it and prevent the packaging action (there's a sketch of such a gate just below). That way release engineering isn't sending something to test and saying, gee whiz, this is something I want to work with.

Next: once we release a product, and I know every software organization I've worked in has had this mindset, we stop caring about it. We've moved on to whatever the next thing is: the next sprint, the next product, the next version. The reality is that those dependencies are going to have vulnerabilities disclosed asynchronously to your product lifecycle. So you want an assessment produced from what was released as well, fed back in so you can raise the flag the instant that occurs: have a JIRA ticket automatically filed and sent to somebody saying, we need to take a look at this. And of course, integrations matter; I want to make certain it all sits within the normal workflow.
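On that gating point, here's a minimal Go sketch of what a prevent-the-packaging-action step can look like. The bill-of-materials report format is invented for the example; a real integration would consume the output of whatever scanner you run:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Component is one entry in a hypothetical bill-of-materials report.
type Component struct {
	Name     string `json:"name"`
	Version  string `json:"version"`
	HighCVEs int    `json:"high_cves"` // count of high-severity vulnerabilities
}

func main() {
	f, err := os.Open("bom-report.json") // hypothetical scanner output
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	defer f.Close()

	var components []Component
	if err := json.NewDecoder(f).Decode(&components); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	blocked := false
	for _, c := range components {
		if c.HighCVEs > 0 { // policy: nothing ships with known high-severity CVEs
			fmt.Printf("BLOCK %s@%s: %d high-severity CVEs\n", c.Name, c.Version, c.HighCVEs)
			blocked = true
		}
	}
	if blocked {
		os.Exit(1) // non-zero exit fails the build step and prevents packaging
	}
	fmt.Println("policy clean: packaging may proceed")
}
```

Run it as the step before your packaging target, and the developer's choice from six months ago gets re-evaluated on every build, not just on the day it was made.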
Now, you're all, well, for the most part, Red Hat people here, so I want to talk about what we're doing; I've got a specific call to action and an invitation for you. The first thing: how many people, rough show of hands, are familiar with OpenSCAP? Awesome, that's a few more than I thought. One of the interesting things as I've talked to field-facing people within Red Hat is that they say: we've got OpenSCAP, why do we need you guys? Our sales people hear things like: well, we've got this competitive thing, it's called OpenSCAP, however you pronounce it. It's a completely complementary solution. There's a lot of stuff in OpenSCAP that will never exist in the Black Duck world, and a lot of stuff in the Black Duck world that is very unlikely to exist in OpenSCAP. Perfect example: things that are Red Hat are going to be in OpenSCAP; that makes perfect sense. But you're probably not going to worry about MariaDB there to the same extent that you would worry about a Red Hat product. We worry about all the other stuff that isn't Red Hat. Both solutions are integrated with Atomic. If you run atomic scan with the OpenSCAP scanner against a container ID, that works perfectly fine; if you then run atomic scan with the Black Duck scanner, you'll see that much more. If you take a curated Red Hat image and run it through OpenSCAP, you'll get a thumbs up, you'll get some results back. If you take a Docker Hub image and run it through OpenSCAP, you'll get nothing, because it doesn't know anything about it; the signatures don't match. With us, you'll get information in both cases. So that's number one. That was announced last year at Red Hat Summit, and we've been jointly running with it since. Containers that are being released should all go through this dual process.

The second thing is around OpenShift. It's a complicated block diagram, but we're integrating in two areas: on the registry side and on the CI/CD side. So here's the integrated architecture. In the Black Duck Hub world, we have our data center, which contains that knowledge base; it's roughly a half-petabyte Hadoop cluster at this point, so we don't like trying to ship it to people. Though oddly enough, governments do like to say: we'll happily take your half-petabyte Hadoop cluster, we want to put it behind our firewalls too. We can do that; the question is why. Everything else, in white, lives within the data center of the customer. There's the Black Duck Hub: that's our product, a central location for all of this. Then there's OpenShift, and I assume I'm walking into an environment that already has OpenShift running. There's going to be stuff in that registry, and I need to integrate with it and pull it in. We've got a hub controller and a scan pool. The scan pool does nothing more than take information from the controller about the images it knows about; whether you have one image or a thousand, it doesn't matter, it pulls them all in and scans them up into the Black Duck Hub. We have a policy engine in there, which can flag important things: are there any high-severity vulnerabilities in this type of application? Have I done something wrong? Have I done anything I really want to care about? Take that policy engine, feed it back into our notification service, which then interfaces with the admission controllers. And if you haven't figured it out, the OpenShift stuff is in red and our stuff is in blue. Of course, there are also image stream events coming out of Kubernetes and bubbling up.
Those events occur whenever a new build, an image config change, or a Docker import happens, so we tie into those as well and know about the activity sitting in there. There's also the build pipeline that's part of OpenShift. That can be gated, so it might fail; that's why it's a dotted line. We have a build pipeline engine going in there too, which also feeds off the policy engine. The nice thing about all of this is that once it's in place, it becomes exceedingly difficult for someone to deploy something that has an issue, or to be unaware that something they've deployed has an issue, because the admission controllers are part of the puzzle. New events coming in, whether images are built, imported, what have you, are all in there. So this becomes administrator defined, and it covers what happens after release. Our objective is not to say, oh, we're going to stop these builds, though we could; that's something we can very much do today. The objective is to say: you're deploying this in production, and you need to know what the correct answer is. People aren't talking about that correct answer, and here's a way of getting it.

Now, I cut a couple of slides to stay on time, but the registry integration piece exists today, with the hub controller and the pool scanner. The notifications should be done within the next week or so. But we need your help, and this is my big call to action. We're doing this completely out in the open. Take it, fork it. Help us understand what the real world of OpenShift looks like, because we want OpenShift to be all the success it can be, and we want to understand all the various permutations of configurations so that we don't screw it up. If you've got good ideas about how to define the policies for what happens when an image has an issue, we want to know about them. If you want to write some code on this, I will happily accept a pull request; that's not an issue at all. The code is all written in Go, the new hotness; I apologize if you're not a Go developer. Because at the end of the day, my objective is that through everybody working together, we can build a more secure data center and put Mike out of business. Thank you.

So I can open this up to questions, or I can put up a little bit of a demo; it's kind of up to you. I'm going to start with questions, and if there's a little bit of time, I'll throw up the demo. Yes?

So the question is: can I get a notification as a result of a zero-day vulnerability? The answer is maybe, and the reason is that we're beholden to the types of research we're pulling in. We have a research team in Belfast and a machine learning team in Vancouver, and they're specifically chartered with expanding our view of the world for what's being disclosed. As you saw from the two examples, glibc and Dirty COW, there is activity prior to disclosure that is discernible, and that's what those teams look for. So if we know about it, we might actually be able to see day minus one. We do see that a lot. And the instant we've scanned something, the very first time, all of its hashes are known to us, so from that point forward we never need to scan that thing again. Now, suppose we see a glibc with a hash we've never seen.
Let's say, for example, it's off of a private patch queue that Red Hat maintains. That's a slightly separate issue for us, because that hash won't match anything we have. In that scenario, we can accept the hash: you as administrator can put that hash in, because you know it's legitimate rather than something malicious, and from then on we'll match against it automatically. To us, that looks exactly like a knowledge base update. One of the things we're actively working on right now is that we don't have as many data feeds from Red Hat as we would like. We're working on augmenting that, so that even for private patch queues, where we don't necessarily know what's inside, we know it's something you consider important and we can work with it. And time for a scan is a couple of minutes, so: day zero plus a couple of minutes. Any other questions?

Yes, Node. We get our Node information from a couple of places; it's more of a bill-of-materials problem for us than anything else. We do have an integration with npm. Actually, in the demo, the npm portion isn't baked in yet, but it's in our normal scan engine; I just have to add a dependency for npm and I'd have it. So yes: Node.js, not a problem. Java, Go, Ruby, and so on; it's forty-something languages that we support at this point. Anything else?

Who wants to see a little bit of a demo? Who wants to tempt the demo gods? OK, and I do very much say that this is tempting the demo gods. So, if I have done my job correctly, which sometimes I do and sometimes I don't: what I'm on right now is the Container Development Kit. This is effectively the cluster master, and I just need to do one thing: double-check my scanner. Oh, that's sad. If everything goes well, what we'll start to see over here is... oh, I forgot one thing. We are not quite at the point of having everything nicely packaged up into a template. What's happening right now is that it has enumerated all of the images in the registry and it's verifying whether they actually exist on the system. You'll see a bunch of 404s going by; those are normal error messages. We're not pulling down images that aren't already there, so we don't end up bloating up the Docker machine. As I've got a pool here, I'll move to the scan engines: as they encounter legitimate images, they run the scan and upload it into the Black Duck Hub, and the Black Duck Hub ends up looking like this. Let's see if it works this way. That does not look right; apologies, I'm going to duck down.

So here, for example, is a scan we did a couple of days ago. We see the package location, the image ID, and the types of information available in here. For example, this is what some operational risk looks like: there are eight new project versions here, there's increasing commit activity, and so forth. In this case, we're flagging a GPL license, because we haven't determined whether this is something that will be physically shipped, run as an internal service, or offered as a SaaS service; that's something we define on the project itself. And then security risk: we see all of the matches in here, and this is one of our sources. It identifies a type of issue, and we can go view the full record on it.
You can go through this review process for each thing in there, including getting the impacted projects as well as references for the information we've found so far. And one of the interesting things you'll see appear in this list from time to time is an exploit database location. If we determine that a particular GitHub repo has exploits for a given CVE, we'll highlight those as well, so you can bake, say, Metasploit into your QA process: look, we believe we're vulnerable to this, we've tested that we are, so let's bring in the exploits that have been published and see whether they trigger it too. Trying to close that development loop with an operations mindset is really the objective here.

So, how am I doing on time? Any more questions? Two more minutes? OK, if there are no more questions, thank you very much. Oh, and I brought two "What the Duck" t-shirts. They are American large. If anyone wants one, please do come and grab one. Thank you.