All right, thanks everybody for joining. My name is Dave Muir. I've been working with Black Duck Technologies for the last seven years, and as some of you may know, we were acquired by Synopsys late last year. Synopsys is building a business unit around a suite of software security tools, and Black Duck happens to be part of that suite. I work in our business development and alliances team with great partners like Red Hat, and we've built an integration with OpenShift, which I'll go through in detail today, and I'll try to do a demo.

Yesterday I presented on container security up front here, so just a quick review: there are a lot of different security tools out there, and what I'll be talking about is the middle piece here, software composition analysis. The term is fairly new in the market, but the practice has been around for a while; Black Duck has been doing it for 14 years. It's about finding the open source software that you're bringing into your applications or your container images. So that's what we'll be talking about today.

Let's dive into the architecture of our integration. Last year we delivered version one of this integration, which we're calling OpsSight, if you hear that term, and about three days ago we delivered version two. Within an OpenShift environment, OpenShift has an integrated registry, or you can pull images from external registries, whether that's Artifactory, GCP, whatever. Black Duck has a couple of components. One is our huge knowledge base, which typically sits in our data center. It has over a petabyte of data that we've been collecting for 14 years: essentially all of the open source files we've gathered from over 10,000 sources, plus a lot of metadata around that open source. So think about it: we have all the versions of OpenSSL, of Apache Struts.
We support over 80 programming languages when we're scanning and identifying open source. So the knowledge base is typically in the cloud, though it can be delivered on-prem. The second piece of the architecture is what's called the Hub. The Hub is the web application that stores all the results after you run scans, and it produces a lot of different metadata, which I'll talk about when I get to the demo.

The integration itself lives in an actual project within OpenShift. When you install it, it creates a project and a set of containers. A couple of those containers watch the OpenShift and Kubernetes APIs: the ImageStream API, which is specific to OpenShift, and the pod-creation API, which is a native Kubernetes API. So whenever an image hits one of those events, whether it's created, you do an S2I build, or you instantiate it in a pod, our processors find that image and talk to another core container that then launches a scan of it. Essentially, what's happening is we export that image to a tar file and scan the tar contents. The Black Duck scanner is basically a one-line command that can scan really any files, binary or source code, and detect the open source within that scan target.

There are actually two pieces to this pod right here: the scanner, which is instantiated when it gets an image event, and the image getter, which is really the only privileged container running in this infrastructure, because it needs the Docker socket connection to do the Docker pull. There's an example of using a different image facade: if you want to use another container, it's pretty easy to swap these things out, because this architecture is very modular, and I'll point you in the right direction. Basically, it goes and gets the container contents and does a copy instead of a Docker pull.
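The export step just described can be sketched roughly like this. The real image getter does a docker pull plus a docker save over the Docker socket; to keep this sketch runnable without a Docker daemon, a plain tar of a directory tree stands in for the exported image, and every path and file name here is made up for illustration:

```shell
# What the image getter does, in spirit:
#   docker pull <image>
#   docker save -o image.tar <image>   # needs the Docker socket (privileged)
# Stand-in below: tar up a fake root filesystem so this runs anywhere.
mkdir -p /tmp/img-demo/rootfs/usr/lib
echo 'example library file' > /tmp/img-demo/rootfs/usr/lib/libexample.so.txt
tar -cf /tmp/img-demo/image.tar -C /tmp/img-demo rootfs

# The scanner then works over the tar contents:
tar -tf /tmp/img-demo/image.tar
```

In the real integration the tar contains the image's layers, and the scanner walks each layer's files rather than a single rootfs directory.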
So there are various ways you can implement the image getter; it essentially calls two APIs to let the scanner know an image is ready to be scanned. The scan itself takes what we call fingerprints of all the files within that tar file. It goes through all the layers, takes these signatures or fingerprints of the actual files, the date of each file, and the paths in the directory structure, and sends them to the Hub. Those signatures go up to a matching algorithm in the knowledge base, the knowledge base sends the matches down to the Hub, and the Hub builds what we call the bill of materials, or component list, of the open source it found, which I'll show you here in a minute. Along with that, we get known vulnerabilities, licensing information, and operational information.

Now, you only have to scan once unless you change an image. After a scan, the Hub and Black Duck continuously monitor your image contents, so you basically have an inventory of your open source components. And as you watch the duck here, if new vulnerabilities are published, that information will be pushed back into OpenShift. I didn't mention that after a scan completes, we label and annotate different things within OpenShift, and I'll show you that in the demo: we label pods, and we both annotate and label images. We can also optionally be installed with a Prometheus master. We push all sorts of stats to certain ports, and you can use Prometheus, or your own implementation, to watch those ports and get statistics, like what scans are running, for example. I'll show you some of that as well.

So let's go to a demo. The internet's been kind of fun lately, so I have some canned screens, but I'll try to make the internet work. One thing to note is that this integration is open source. Black Duck Hub and the Knowledge Base are a product; they're subscription-based.
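Going back to the scan flow for a second, here's a minimal local sketch of the fingerprinting idea: for each file in the scan target, compute a content hash and record the path and modification date, which is roughly the kind of signature data described above. The actual hashing and matching scheme is internal to Black Duck; this is just an illustration:

```shell
# Build a tiny fake scan target.
mkdir -p /tmp/fp-demo/etc
printf 'demo config contents\n' > /tmp/fp-demo/etc/example.conf

# Fingerprint every file: content hash + modification date + path.
find /tmp/fp-demo -type f | while read -r f; do
  hash=$(sha1sum "$f" | cut -d' ' -f1)
  mtime=$(date -r "$f" '+%Y-%m-%d')
  echo "$hash $mtime $f"
done
```

These signatures, not the file contents themselves, are what goes up for matching against the knowledge base.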
So once you purchase Black Duck Hub, you get this integration for free, and we have an upstream and a downstream project. The upstream open source project is called Perceptor. If you go to Black Duck's GitHub site and search on Perceptor, you'll see all sorts of projects related to it. One of those projects is an example of how you can take that image-getter container and swap it out for something that doesn't use privileged containers, if you're interested in doing that. It's just an example, and it shouldn't be put in production, but it'll guide you through how to create and use another container for that image getter. So there's a lot of good stuff out on GitHub.

Here's a look at what Black Duck provides after a scan occurs. Scans usually take minutes, depending on what you're scanning. If it's one file, a scan takes less than 30 seconds. If it's an application, it typically takes around a minute or so. Containers are a lot bigger, some of them anyway, so a container usually takes about five minutes. If it's taking longer than that, you can scale up your Hub; we've got job runners and horizontal scaling capabilities to bring those scan times down.

Scan results give you things like the list of open source components down to the actual version. Versions are very important, specifically for understanding what license is tied to that version, as well as what security vulnerabilities are known for it. Then there are things like operational risk, which is also version-specific but also specific to the project. For example, in this case, there are 18 newer versions of rsync available that I could be using. The project's stable, but there are only two contributors, and that might cause some concern. You want to make sure your open source projects are healthy and have a pretty rich community around them. That's operational risk.
License risk is more for those who are distributing applications and don't want their IP leaked. For example, if I've got an external distribution and I have GPL licenses, that's a high risk, because the GPL license carries obligations that say if you use this, you have to open source your application. Lawyers at commercial companies obviously don't like that. But you can change the distribution here to internal, and GPL is fine; or if you're creating your own open source project, that would be fine as well.

Now, security risk. Let's see if the internet's going to work for me here. I think I actually have it. Oh, yeah. So security risk is the known vulnerabilities for that open source component and version. We use the National Vulnerability Database, but those results tend to be slow and don't really give a lot of actionable information. The other thing you see here is that I've got three vulnerabilities for Bash 4.2.46, and we're also pulling information from the Red Hat errata data. Red Hat has already patched these vulnerabilities, and since we've got that feed, this is the default view after a scan: we don't alert our customers that they have to worry about these vulnerabilities. What we do also look at are the things Red Hat doesn't have insight into, and that's the application layer, the dependencies developers are pulling in that aren't Red Hat-curated open source. That's the complementary nature of what Black Duck provides alongside Red Hat.

Let me see if I can do a search here. We'll do a search for CVE-2017-563. You don't have to scan things within Black Duck to get open source information; you're connected to our knowledge base. So for example, if I wanted to search on components and be proactive about which components I want to use, I can do that.
There are also IDE plugins and Chrome plugins that give developers insight into the things they're looking at as they develop and pull down open source software. Let's see if the internet's working here. If I click this link, BDSA-2017, it maps to this CVE. BDSA is the Black Duck Security Advisory, the enhanced vulnerability data we provide to our customers, and it looks like the internet's behaving. So it's things like: what's the workaround? Where was this fixed? Here's the actual GitHub location of the fix, so you can see the code itself. Here's the exploit code. Here's the technical information. We want to provide more actionable information to our customers. It doesn't look like it's working right now, but I'll try to get back to that.

Now, from an OpenShift perspective, there are two things we push back into OpenShift after a scan occurs. One is the vulnerability information: is this image vulnerable? Is this pod vulnerable? How many components are vulnerable? The second is policy management. Policy management in Black Duck, as you can see on this screen, gives you the ability to create all sorts of policy rules based on open source component conditions and project conditions. So for example, I can create a blacklist of open source components, or project-specific conditions. If I go back to my list of components and filter on policy violations, let's do that. OK, in violation, it's thinking about it. Yep, the internet's not working very well. There is a component in here that's on my blacklist, so basically that means this project is in violation. What that means within OpenShift is that we label pods with this information. You can see here in this Node.js image I created a little while ago that it is in violation. You can have multiple images per pod, so there's an overall status, and then there's a violation status per image as well.
You can see how many components there are; here, one component has a policy violation. That's through the UI, or you can query it, and most OpenShift operators like to query. I can do an oc describe, let's see if the internet's working here, against either pods or images and use that label. This is actually where the integration stops. We've had a lot of discussions about what to do next: how do we use these labels? I've seen some of our customers query for those labels in the CI/CD process and then auto-promote images to different environments like stage or test. And the internet actually did return some stuff; you can see the labels here on this pod as well.

If I go back, let's see if we can look at the image here. We do the same labeling with images, but we also annotate them. If I go to Latest and look at the annotations, you can see some specific image annotations within OpenShift; we actually worked with Red Hat on this specification. This is specific to OpenShift. You can't really get this with Kubernetes; they don't have that concept yet, though they may in the future. So that's how we push that information back into OpenShift.

Now, the integration is continually watching. The core piece pings the Hub every 30 minutes, and the Hub pings the knowledge base every hour; both intervals are configurable. So when new vulnerabilities come out, we push new vulnerability information every hour, and the core piece receives that event and re-annotates and re-labels those pods and images. So you're no more than a couple of hours away from knowing, when a new vulnerability hits the media, how many of your images are affected by it. With our legacy product, and with some of our competitors, that would be months. With this new architecture we've built over the last five years with the Hub and with OpenShift, now it's a couple of hours before you know if you're affected.
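The CI/CD gating idea mentioned above, querying the labels and deciding whether to promote an image, might look something like the following. The label key and the oc jsonpath expression are hypothetical (run oc describe in your own cluster for the exact names), and the lookup is stubbed with a canned value so the sketch runs without a cluster:

```shell
# Hypothetical promotion gate built on the labels the integration writes back.
check_policy_violations() {
  # A real pipeline might run something like:
  #   oc get pod "$1" -o 'jsonpath={.metadata.labels.policy-violations}'
  # The canned value below stands in for that lookup.
  echo "1"
}

violations=$(check_policy_violations my-nodejs-pod)
if [ "$violations" = "0" ]; then
  echo "no policy violations; promoting image to stage"
else
  echo "found $violations policy violation(s); blocking promotion"
fi
```

The same gate works against images instead of pods, since the integration labels both.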
So that is a deep dive on the architecture. Oh, thank you. People in the back.