Well, hello everybody and welcome again to another OpenShift Commons briefing, this time an incredibly timely talk on security, open source security and containers, from our friends at Black Duck. Tim Mackey's with us. We're going to let him do his presentation and talk all about the goodness of what Black Duck's doing there, and with all the Apache Struts accusations from Equifax, as if that's the problem of the world, Black Duck's going to help us figure out how to solve those and prevent that stuff from happening, and give us their insights into this space. So, Tim, without any further ado, I'm going to let you go ahead and introduce yourself and your topic. You can ask questions in the chat, I'll try and answer them, or there are a couple of other folks from Black Duck on too who may answer them, but we'll have live Q&A at the end. So we'll go for it, Tim. Thanks.

Well, thank you, Diane, and welcome, everyone. My name is Tim Mackey, I'm a senior technology evangelist with Black Duck Software, and I'm going to talk today a little bit about managing container risk. This talk is going to be at a little bit higher level than just OpenShift specific for probably about three quarters of the talk, and then we're going to dive down into how this all benefits an OpenShift environment, because the container infrastructure and so forth is what we all want to be using and deploying and being successful with in an OpenShift environment.

So, I'm going to start out with a set of assertions, and my first assertion is going to be that, from a development perspective, we have for practical purposes baked security into our SDLC and have effectively followed a security-driven development and deployment model: developers are empowered with security information, we have security-driven release policies, trusted components as part of our CI loop, security testing is baked in, binary artifacts only get created if those policies are met, we're signing the things we're supposed to sign, and images are being stored in trusted container registries and only deployed from those trusted registries. That's my assertion, that's my starting point, and if those nine things are actually met, the question then becomes: well, what could go wrong?

The sad thing is that quite a bit can actually go wrong. CSO magazine earlier this year had an article by Maria Korolov that said the easiest way to get fired in 2017 was to have a security breach. I don't know exactly how many people at Equifax are now potentially looking for jobs, but this is the reality that IT lives in today. Part of this is born of regulations that are global in nature: for example, in the EU there's GDPR, and in Canada there's legislation called PIPEDA, and they're basically setting regulatory requirements for organizations that have personally identifiable information on their customer base, around how they disclose, how quickly they disclose and the details under which they disclose, with a set of penalties; in the case of GDPR there's actually a percentage of revenue associated with those penalties. In other scenarios we've seen execs like the CEO of Target actually get fired as a result of the breach they had a few years ago.
To focus this down a little bit: IBM and the Ponemon Institute annually put out a Cost of Data Breach study, and there were three items in there that really caught my eye. The first was the average cost of a data breach being a little over 7 million dollars, and the lost business from it a little over 4 million, but the shocking number was that the length of time it took to identify and contain a breach was a little over six months. Now put that in perspective with the Equifax breach from last week. It came out overnight that the attack vector used was an exploit of the Apache Struts vulnerability that was disclosed in March, and Equifax made a statement that they discovered this in July, the 29th of July to be specific, and that the data exfiltration occurred from May through to then. If you take the March timeline for that Struts disclosure and move forward to the 29th of July, that's under six months, which means that, as horrendous as it is to make this statement, Equifax actually did a better job than the average organization. So this is the type of world that we're actually living in.

Now, a lot of people say: well, you know what, from an application perspective it doesn't really matter, our infrastructure guys have thought of everything. So if I look at a build-out, I've got some users, some shiny happy people on the left, I've got some perimeter defenses, and I've got my data center. If I build out a compute node inside of that data center, I may have a hypervisor in place that has its own set of services, which includes an SDN for ensuring that only the traffic that's supposed to be there is there. I have some form of security service in place that is looking for malicious activity within the virtual machines themselves. I obviously have a virtual machine that is going to be containerized, and this allows me to have multi-tenanted segmentation. I'm going to have a minimal OS like Red Hat Atomic, I'm going to have some number of containers in here, and because this is a virtual environment, I'm going to replicate it to however many container VMs are necessary.

If one of these containers happens to have a component that is vulnerable, things get interesting quickly. So we change our shiny happy people into a malicious actor, and now let's assert that that malicious actor was able to compromise that vulnerable container. Well, they're now on exactly the other side of all those perimeter defenses, and so they're in a position where they could potentially mount an attack from one compromised container to another, despite all of the structural rules that are in place, notwithstanding the fact that a lot of the vulnerabilities we see in web-based frameworks require a reconfiguration of perimeter defenses in order to even detect the patterns of attack.
So the goal when trying to put security in place in a large-scale infrastructure is truly to question everything and continually re-evaluate the trust of what's out there. We should be looking at things like: where does your base image actually come from? If you're building it locally, when was it brought down? What is the health of that base image? Within the Red Hat Container Catalog we now have a health index that shows how up to date an image is: does it have any known CVEs in it, what's the patch interval, what is truly the health of that image? If you're running through a set of build servers first, do you trust them? Is there a way that a foreign container can start in your environment? Are you allowing, say, an OpenShift template to come in? Are you allowing users to access containers that are referenced from Docker Hub directly? Do you have the provenance there? If you're building from base images from outside, what happens if that registry goes away? What happens if the tag that you're beholden to goes away or changes, because after all tags are mutable? What's the process to determine the impact if there's a new security disclosure? All of these things are part and parcel of the trust model that needs to be put in place as we grow and as we mature.

And if at this point you say, wow, this is an awful lot of work, yes, most people would be right in saying this is going to hurt their brain. So let's take a little bit deeper dive into how we can better manage some of this, because at the end of the day we don't want anyone to get fired as a result of a data breach. These things are manageable if you understand the information flow.
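As an aside on the mutable-tags point above: one way to take a tag out of the trust equation is to resolve it to its content digest and reference the image by digest from then on. Here's a minimal sketch using the Docker Registry HTTP API v2; the registry host and repository names are hypothetical placeholders, and a private registry would also need an auth token.

```python
# A minimal sketch: resolve a mutable tag to its immutable digest using the
# Docker Registry HTTP API v2, so deployments can pin images by digest.
import requests

REGISTRY = "registry.example.com"  # hypothetical internal registry
REPO = "myteam/myapp"              # hypothetical repository
TAG = "stable"

def resolve_digest(registry: str, repo: str, tag: str) -> str:
    """Return the content digest the tag currently points at."""
    resp = requests.get(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
        timeout=10,
    )
    resp.raise_for_status()
    # The registry reports the immutable digest in this response header.
    return resp.headers["Docker-Content-Digest"]

if __name__ == "__main__":
    digest = resolve_digest(REGISTRY, REPO, TAG)
    # Reference the image as repo@digest instead of repo:tag from here on;
    # the digest cannot change underneath you the way a tag can.
    print(f"{REGISTRY}/{REPO}@{digest}")
```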
One of the things that challenges enterprises, and I'd be willing to bet this is something that challenged Equifax, is that open source doesn't play by the traditional commercial proprietary software rules. If you take a look at a pure upstream project compared to commercial code: when a project decides that a given version is end of life, there's no opportunity to pay a boatload of money, there's no dedicated support team with SLAs, there's no staff of security researchers, there's no transactional relationship between a procurement team and a quote-unquote vendor. Now, obviously, as you move away from pure upstream, you can get organizations like Red Hat who will very nicely provide support services and curation for all of this, but when we look at the infinity of open source, it is truly community-based activity. It's truly a scenario where, if you've forked, if you've done anything that deviates from that distributed component, ultimately you're the one who's responsible, and you need to establish that relationship.

Just to put a bit of a point on it: I look at this MediaWiki maintenance release announcement for versions 1.26, 1.25, 1.24 and 1.23, and there are two key pieces in here. There's a security disclosure that says various special pages resulted in fatal errors; that's in the first yellow blob that I highlighted, and that's the nature of the security update. So if you are a MediaWiki admin looking to determine whether this is appropriate to deploy at this particular point in time, that's the information you're working with. The other thing is that there's also a note about end of life, and it says: please note that 1.24.6 marks the end of support for the 1.24 series of releases; technically this ended a few weeks ago, however 1.24.5 had issues along with other versions, so it was thought fair to fix them. So the community was trying to do the right thing, but they're baking the information in in different and interesting ways, which makes it a whole lot harder, if you are on a security response team, to determine whether you are impacted by a specific notification.

What we're seeing today is that attackers themselves are getting incredibly resourceful, and you need to be just as resourceful, if not cunning, in how you respond. So here we have a potential attacker, and this attacker has a job to do: the job is to determine whether or not there's a set of vulnerabilities against a specific set of configurations or platforms. They create their attack, they test it against platforms, and chances are a couple of iterations don't go so well, but they're very persistent and they're going to iterate, and eventually they're going to find something that's successful. Now, that success might not necessarily be something that supported the original thesis, but a success is a success is a success, and they're going to claim victory and move on. In order to move on, they have to create a deployment vehicle, or utilize one of the multitude of deployment vehicles that are out there, to take this attack that they've now created and package it up for utilization. Now, they have a trust issue themselves: they need to be able to demonstrate that this in fact works, so in all likelihood they create a video that goes up on YouTube showing exactly how their attack was able to compromise a system, and gee whiz, isn't this a wonderful thing, you should be using it too. If this looks a little bit like an SDLC, that's because it is. This person has a job to do; they're working for someone who has an end goal of being able to build something out. But this is something that also happens a little bit off in the shadows, and occasionally, as with Equifax, what I refer to as the PR department gets involved, and the nature of the vulnerability that's out there ends up getting a ton of publicity. That increases their credibility and increases the value of what they've accomplished. This is their business, and this is what we're collectively fighting against.

So I've talked a little bit about stuff that's theoretical; now I'm going to make this incredibly real and decompose a specific vulnerability, one that was made public through the process known as responsible disclosure. Under that process, a security researcher uncovers an issue, goes to whoever the project owner is and says, gee whiz, if I do this, bad things happen, and I assert that that's a security issue. Collectively they work together to determine exactly what the scope of it is and create patches. Those patches are then brought downstream into distributions, and the idea is that until those patches are actually released, nobody outside of those core team members actually knows that the issue is happening or could be out there. In decomposing this vulnerability, I've decided to choose one from last fall that impacted a lot of the systems we're dealing with on a daily basis, specifically the Linux kernel. This was an embargoed vulnerability, which is the term that's used when you're working through responsible disclosure. It was given the name CVE-2016-5195; CVE stands for Common Vulnerabilities and Exposures, 2016 just happens to be the year in which it was allocated, and 5195 is just the sequence number associated with the block of numbers it was run through; nothing particularly fancy about it.
Now, the upstream patch was created on the 18th of October by Linus, and this is his commit message; I highlight a couple of important pieces in here. The first piece: this is an ancient bug that was actually attempted to be fixed once, 11 years ago, but that fix was then undone. So what we've effectively established is a set of commit IDs, which I didn't highlight, that go back 11 years and represent the timeline for this particular issue. Now, there's a whole series of forks that happened over that time period, so there are going to be multiple branches of the kernel that are impacted by the patches. The next piece I want to call out is the last highlighted section, which says that the VM has become more scalable, and what was a purely theoretical race condition back then has become much easier to trigger. What that's really saying is: if we look back at the types of servers we were working with 10 or 11 years ago, they were single-core machines, maybe with some hyperthreading, and we might have two or four sockets in there, so there wasn't a whole lot of concurrency. Race conditions love concurrency. Today you can get 12-, 18-, 24-core sockets, and there's a ton of concurrency in there; throw in a second socket and you just doubled the nature of the concurrency. And when you're dealing with something that's a copy-on-write issue, as this was, race conditions can be particularly problematic.

So that's Linus's commit message on the 18th of October. On the 21st of October the embargo expires, there's tons of media coverage, and the silly Dirty COW branding and the dirtycow.ninja site get created, where you can buy silly things in their shop, including absurdly priced t-shirts and coffee mugs. Patches are available from all major distributions, the embargo has expired, various people start to make assertions, and the timeline moves forward. In the US and Canada we have a lovely little fall festival called Halloween, where people love to dress up in silly costumes: kids dress up and go door to door looking for candy, grown-ups party in their silly costumes, and we have a lot of fun at Black Duck. This is Madeline; Madeline's one of our field sales people, and she decided that she was going to dress up as Dirty COW as part of her team, and they actually won a contest. That's the 31st of October.

Now, if you've got media coverage for this vulnerability, the logical place where you would expect to find security information would be what's known as the National Vulnerability Database. The NVD, which people sometimes refer to as MITRE, actually had no meaningful information on this vulnerability until the 10th of November. That's roughly three weeks of timeline from when the embargo expired: patches were available, people could dress in silly costumes, and there was still nothing disclosed from a security perspective. That's a pretty big time window for someone to mount a malicious attack. But there's a whole series of point-in-time decisions and information pieces that play into this. When the embargo expired, various media outlets were asserting that this was not remotely executable. It turned out that it was; that information came out about six hours later, when the researcher said, well, I figured this out by looking at my web logs, so yes, this is remotely executable.
There were initial assertions that virtualization meant this was not exploitable; it took the better part of a day to establish that that depends on how the hypervisor is architected, so some are and some aren't. Then about three days went by where people were asserting that if you were in a container, the namespaces effectively prevented this from occurring. If you look about halfway down the page of PoCs, you'll see one that is deadbeef.c; that's actually a container breakout, and it took a little over three days for that to come out. It uses a very interesting way of manipulating the system in order to bring that forward. At this point in time there are well over 80 such proofs of concept out there. So if you made your decision about how to go about patching this when the embargo expired, you exposed yourself to a different level of risk than if you were continuously re-evaluating the nature of this particular vulnerability and the exploits that were out there. All this timing is fundamentally opportunity.

Some organizations look at security analysis as: you know what, I'm going to go and do pattern-based static analysis, fundamentally a Coverity scan, for example. Others are going to do some injection testing; others are going to do some fuzzing or some pen testing on the system. In reality, all of these techniques are focusing on the code that the individual or the organization is creating, but they're not focused on upstream, and they're not focused on the dependencies. That's where tools like vulnerability analysis, which is what Black Duck does, come into play. In a full end-to-end security model you want to do static analysis, you want to do injection, you want to do dynamic analysis, but you probably aren't going to get buy-in from your leadership team to run static analysis on the Linux kernel, or on the Docker engine, or on OpenShift components, or on your SDN controllers; that's going to be, quote-unquote, somebody else's problem. That's where vulnerability analysis comes in: it looks at the composition of what you've got and identifies vulnerable dependencies, and it turns out most of those vulnerabilities are actually found by researchers.

Now, one of the things that I really like highlighting is that we're all fundamentally researchers. When we uncover an issue that is potentially security related, we should be working with the project and the project leadership and their security team to get those patches out and get that awareness out there. There is a model for doing that with generic upstream open source projects: if you go to iwantacve.org, the Distributed Weakness Filing, you can see more information about exactly how to approach upstream organizations with security issues that you see, so that they don't remain latent, and we don't have things like an Apache Struts issue that runs back eight years because a bug was there that someone potentially knew about and said, here's my workaround, I'm just going to work around it, as opposed to actually getting it resolved. That timeline and that lack of resolution is how attackers find weaknesses all the time.

One of the things that we saw about this time last year was a 620 gigabit per second denial of service attack on krebsonsecurity.com, and this was done through some compromised IoT devices: think doorbells, thermostats, nanny cams, DVRs, refrigerators, dishwashers, microwaves, anything that's internet enabled, which these days is becoming pretty much everything.
The vector was actually through an OpenSSH vulnerability from 2004, with the flag AllowTcpForwarding set to true. If you look at that particular disclosure's description, one of the things you'll notice is that it describes nothing like what we have today for a compute environment; it does not describe IoT devices. It looks like it should be one of those legacy things that doesn't really impact you. If you dig just a little bit deeper, you'll find that this AllowTcpForwarding flag is set to true, and the man page says that this is not a security issue. I assert that anytime someone says "this is not a security issue," it probably is, and you should have your spidey sense going. The reasoning was that it needs to have a well-known password and be publicly connected in order to be exploited; it just so happened that a lot of these IoT devices had a password of, say, admin/admin or admin/password. So all of a sudden something that shouldn't be an issue becomes a big issue.

Then there's the Apache Struts vulnerability. This actually impacted the Canada Revenue Agency, the CRA, which is the Canadian equivalent of the IRS, right in the middle of the e-file tax season, and it had a little bit of extra press around it because they were proactive and reached out to the media to say: look, we are turning this e-file system off because we are vulnerable and we need to get this thing fixed. As it turns out, the same vulnerability from March is exactly what impacted Equifax, or became disclosed with Equifax, last week. So vulnerability response times matter, awareness matters, and there's an incredibly long tail when looking at these things. We may want to point our fingers at Equifax and say, why did it take you so long, but we can equally point our fingers at the roughly 200,000 websites that are still vulnerable to Heartbleed, a three-year-old vulnerability in OpenSSL, today. You would expect that to be something that's just taken care of as part of normal system hygiene.

One of the things this plays into is what we look at as an open source development risk maturity model, and this is going to feel really familiar to most people. At level one, we're worrying about features and functions; we really don't care what we're bringing in or what our dependencies are. It's the state of blissful ignorance: we want to make something, we want to hit that MVP so we can get something out there and find out whether it solves the problem, because that's what we actually care about. As we move forward, and we find a few people forking it, a few people who've actually downloaded it and might actually be using it, we get a bit of an awakening to understand how the security implications of what we're working with should be attended to, but we're still very much focused on features at this point. As we move forward again, we get to a level of understanding: we start throwing in some manual review processes and some fairly basic tooling, we might have some spreadsheets to keep track of stuff, we might look at some free or low-cost tools and do security scans, maybe prior to each release as opposed to on an ongoing basis. This is where a lot of projects are today. What we really want to get to is a state of enlightenment, where we have automatic identification of all of the risks as they happen and as they're disclosed, and we've baked all of this into our CI/CD environment.
From an OpenShift perspective, that would be: hey, can I bake this into my build pipelines, and what kind of awareness do I have around my builders? So this is the model that we want to take forward, and we want to automate, and there's a set of criteria that we put in place around what makes for successful automation and extracts the best information.

The first thing we want to look at is the factors that impact risk. Item number one is a vulnerable open source component: where are all the dependencies coming from, are the dependencies on the application side or in the user space that's inside the container image, is a component we're dependent upon a fork or a true dependency, and how is it being linked in? These are factors that go into the vulnerable open source side of things. Point-in-time decisions are another problem. We all want to use stable components wherever possible, but stable might be end of life, and if end of life equates to dead, we might have a lot of responsibility and potential technical debt to attend to. Are there change sets coming down the pipe that could make it a lot more difficult to update to a newer version if the need dictates? API versioning issues? What is the security response process for the project, what is the commit velocity, and who are the contributors: are they changing, are they stable? Some of this information is actually coming out of a new initiative that the Linux Foundation announced on Monday, the CHAOSS project, to understand exactly what the true health of upstream projects really is.

Now, when we look at historical data center operations, one of the hot topics is: how quickly have you patched, what's your patch process, what does your patch tooling look like? Well, in reality we don't patch containers, we rebuild them. We might do some A/B, we might put a canary out there, but fundamentally we're not in the patching business when we're working with containers. But we do need to question that patch process. There were a bunch of points in time where Struts actually became more vulnerable due to the nature of whatever was disclosed against it; if you happened to be at a version prior to that and upgraded to a version that was worse, did you actually move from the frying pan into the fire? Are you the fish, or are you the person controlling what's going on? Question that process.

We want to make certain that we're supporting the gating of container builds, so that when something happens in the outside world, and a decision a developer made that was perfectly legitimate this morning becomes completely problematic due to a vulnerability disclosure this afternoon, then when the build occurs we're performing a risk assessment, tossing a ticket into a defect system like Jira, and, among other things, not actually creating that container image so that it can't be deployed. Because if you didn't create it in the first place, well, it's kind of hard to deploy it. So make certain that these gating activities are actually happening; there's a minimal sketch of such a gate after this section.

We also want to build a risk profile for every single container in the system, even builders. So if I'm doing a source-to-image build and my Git environment changes, and I've got the S2I build that's going to toss the result into the internal OpenShift registry, and I've got my triggers and my deployment triggers in place, then say a vulnerability comes in through a Git workflow: it makes sense that you might have some vulnerabilities in your container, but what happens if the builder container is similarly impacted? You still have the potential for a vulnerable container deployment, and so you need to have that awareness.
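To make the gating idea concrete, here's a minimal sketch of what such a CI gate could look like. This is not the actual Black Duck Hub API: the scan-service endpoint, its response shape, the Jira project key, and the credentials are all hypothetical placeholders; the Jira call uses the standard /rest/api/2/issue endpoint.

```python
# A minimal sketch of a CI build gate, assuming a hypothetical scan service
# that reports policy violations for an image. Failing the stage means the
# image never gets built or pushed, so it can't be deployed.
import sys
import requests

SCAN_API = "https://scanner.example.com/api"      # hypothetical scan service
JIRA_API = "https://jira.example.com/rest/api/2"  # standard Jira REST endpoint

def policy_violations(image: str) -> list:
    """Ask the (hypothetical) scan service for policy violations."""
    resp = requests.get(f"{SCAN_API}/images/{image}/violations", timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed shape: a list of violation dicts

def file_jira_ticket(image: str, violations: list) -> None:
    """Create a defect so the gate failure is tracked, not just logged."""
    requests.post(
        f"{JIRA_API}/issue",
        json={"fields": {
            "project": {"key": "SEC"},   # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": f"Policy violations in image {image}",
            "description": "\n".join(v.get("name", "unknown") for v in violations),
        }},
        auth=("ci-bot", "app-password"),  # placeholder credentials
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    image = sys.argv[1]
    violations = policy_violations(image)
    if violations:
        file_jira_ticket(image, violations)
        # A non-zero exit fails the CI stage: the gating activity.
        sys.exit(1)
```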
The same thing is true when you have a build pipeline: whatever steps you have within your build environment, if the pipeline itself has issues, can you actually trust the resulting container? If the various scan and staging activities have issues, they too can produce problematic results when you go to deploy.

The last piece of the puzzle is around ongoing changes in risk. If we assume that we've got every single test done and we've got a shiny happy object, and a new disclosure happened, say, 40 minutes ago, we don't want to be in the business of continuously scanning our running containers, because that's going to impact the performance and scalability of our system. What we want to do is take advantage of the fact that the container images themselves are immutable: perform the risk assessment on the immutable object, aka the container image, monitor the composition associated with that container image, map that back to the outside world and say, look, there's a new security issue, toss a ticket into Jira. Let's have that workflow in place, and as a result have a more scalable system by monitoring this way.

Now, from my perspective, we have a solution that historically has been geared towards the developer experience and the release engineering experience: being able, for example, to provide security information within the Eclipse IDE or Visual Studio, work with the various package managers, integrate within CI toolchains from pretty much every CI out there, and integrate with the static and dynamic analysis tools, so if you're, for example, going through Micro Focus Fortify, you've got the pieces in place, and wherever the artifact storage is, we can scan it. What we've done over the course of the last eight or nine months is bring all of this into the OpenShift world, and that's where I'm going to look next.

Our initial design philosophy was that security response times are too long: the time from the point at which a disclosure is made to the time you determine exactly what portion of your infrastructure is impacted is far too long, and with the rate of new disclosures coming out, it's going to be very, very difficult to keep up, so we wanted to make certain that we automated everything associated with it. This is our basic architecture. We have a knowledge base that we host, partly because it's about 500 terabytes in size and about to become a petabyte, and most people don't really want to be in the business of hosting that. Every other aspect of the solution is customer hosted. Our core application is called the Hub, and it's essentially a hub-and-spoke kind of scenario where various integration elements hang off of it. If we look at the OpenShift environment, and this can be an enterprise deployment or an Origin deployment, they all work exactly the same way: I have the potential for an integrated registry, with image stream events that hang off of that, and obviously I have the potential for an external registry, which could be the Red Hat Container Catalog, could be Docker Hub, could be your own internal Artifactory or Nexus repository. What we do is put in place integration elements designed around listening for activities happening in the system that relate to immutable container objects.
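For a sense of what that listening can look like mechanically, here's a minimal sketch using the standard kubernetes Python client (OpenShift is Kubernetes underneath), watching pod creation events cluster-wide and queueing images for assessment. The needs_scan() hook is a hypothetical stand-in for the real "has this digest already been assessed" check.

```python
# A minimal sketch of listening for immutable-image activity via the
# Kubernetes watch API, which also catches images brought in from
# outside the integrated registry.
from kubernetes import client, config, watch

def needs_scan(image: str) -> bool:
    # Placeholder: a real integration would check whether this image
    # digest already has a risk profile on record.
    return True

def main() -> None:
    config.load_kube_config()  # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream pod events cluster-wide; ADDED events cover new deployments.
    for event in w.stream(v1.list_pod_for_all_namespaces):
        if event["type"] != "ADDED":
            continue
        pod = event["object"]
        for c in pod.spec.containers:
            if needs_scan(c.image):
                print(f"queueing scan for {c.image}")

if __name__ == "__main__":
    main()
```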
So when a new image stream is created that's associated with an image within the registry's purview, we'll see that create event, and we'll see update events as well. We'll also see pod creation events, so that if a container image is brought in from outside the integrated environment, I'll be able to see that. When we see it, we'll perform an assessment to determine whether or not it needs to be scanned; if it does, we perform that scan, the scan results go up to the Hub, and the Hub takes a look at that bill of materials and maps it against our knowledge base to say: here's what the risk is. Risks are assessed by our policy engine, which then communicates everything back to the scan controller and ultimately annotates the images. Those annotations are actually pretty interesting: with Origin 3.6 and OpenShift Enterprise 3.6 we actually have a spec in place where those annotations can be used within an admission control workflow to decide that, you know what, this image is okay to deploy, or this image was okay to deploy and now it's not so great. And of course the outside world is continually changing, so the policy engine will update as the outside world changes, and we'll get new notifications coming in, which will also update those annotations, ensuring that the state of the system is, say, no more than an hour out of date.

So effectively, what this boils down to is that had Equifax theoretically been using OpenShift for the application that was the attack vector we've all been talking about for the better part of the last week, we would have been in a position to let them know exactly which images were impacted within an hour of that disclosure back in March, and to continuously monitor for any changes, so that even if a developer happened to revert to an older version, because that's what they needed to do, we could have flagged it as it happened.

That's our piece of the puzzle, but we want to make certain that we are truly layering container security, and the success criteria for a truly trusted environment start with the platform: OpenShift as the platform, with the Project Atomic host locked down with all the appropriate SELinux enforcing settings in place, using OpenSCAP to administer all of the various policy and governance rules to ensure that there aren't any misconfigurations in the user space or, for that matter, on the Kubernetes nodes, and mapping that against the patch definitions for all the Red Hat products. That takes care of the core Red Hat side of the equation, but it still leaves the infinity of open source, and that's where we take over. We literally scan any and all container images in an OpenShift deployment, including our own, providing visibility into the open source components regardless of the source, annotating those images with the vulnerability information, and automatically updating them with new disclosure information as it occurs, without any need for a rescan and without any human involvement; it's completely automated and integrated within the system.

So, I've bored everyone with some slides; I'm going to take a little bit of a risk here and try to do a little bit of a demo. Diane, how are we doing for time? You're doing fine. If you could do me one favor and hide the bar at the bottom of your screen that says "stop sharing"... there you go. Thank you, and apologies, everyone, I thought that was only on my side. Okay, so I'm going to show exactly how easy it is to install. Actually, you know what, I'm going to show what we've got.
Let me make sure I'm logged in in the right place. So here's my OpenShift console; it's probably going to want me to log in again. I actually have a project called blackduck-scan, which I'm going to delete right now; the blackduck-scan project is our integration elements. Once it's completely deleted itself, along with all the container infrastructure underneath it, we'll go and refresh. So I'm going to run the installation, and it's going to ask me incredibly difficult questions like: where is my Hub server, which just happens to be right here; what's my username; and what's the version of my Hub server, which I happen to know is 3.7.1. I'm going to go with two concurrent scans to make this go a little bit faster; for the most part that's the value people use, though occasionally, if the nodes are smaller, we recommend going with one concurrent scan, and in very large clusters we've seen three to be beneficial. It goes and reads all the components, and if I go back to my OpenShift console, I'll see the blackduck-scan project in place, and I have a total of five containers here.

Now, my infrastructure itself has four nodes, and the way this is architected, we have a daemon set on each of the worker nodes, and the daemon set is listening for any activity, at the node level, related to images being created and deployed. When it uncovers an image that might need to be scanned, it will then ask for permission from the arbiter. So I'm going to take a look at our pods. I'll take a look at the arbiter, and we see right now it's assigning a variety of jobs to ensure that the scans of all these fully qualified containers are being performed. If I look at a controller, the controller actually consists of two containers: one's a sidecar, which is the scan engine when it isn't being used, and the other is the actual scans being performed. From a usability perspective, I could kill any one of these things off and they would restart themselves, because that's what cloud native computing is all about.

At the end of the day these scans are being performed, and the information is coming up into our Hub, which I'm going to log into. What we see is a set of projects that are created, and in each of them there's going to be some amount of registry information. This 172.30.103.10 happens to be the console associated with this OpenShift environment; sorry, we have multiple OpenShift environments coming in. If an image is coming from Docker Hub, it won't be fully qualified; sometimes we get ones from docker.io. We'll also be scanning things that are coming out of the Red Hat Container Catalog as they're used. So I'm going to take a look at this image here, hub-documentation. The version is the first 10 characters of the image digest, so that's completely immutable, and I see all of the components that are actually in here. There are 221 components that are part of this particular application: there are some Hibernate pieces in here, there are some things coming along completely for the ride as a result of the user space, and in this case there are 19 components that have a high severity vulnerability associated. So let's see, what do I want to pick on? I'll pick on Tomcat; let's see what we have from a Tomcat perspective. I have some new vulnerabilities in place, I can see exactly what the record is, and I can get a deeper view of what's in here: descriptions, how exploitable it is, and some references.
And a lot of the time we're going to be able to get to things like discussions around the particular fix, and occasionally find the actual exploit code, so that you can actually test against it. These are all the normal things we have for a specific vulnerability. But importantly, this one was in tim/arpeggio, so if I go to the tim project and take a look at images, I'll find arpeggio, and I'm going to look at this snapshot. This is some of the annotation information that ends up being put in place: I can see the server this was running on, the version that was there, what the endpoint is, and this quality.images.openshift.io/policy.blackduck annotation. This is the specification I was referring to earlier, where I'm now able to flag that this is now a non-compliant policy, and so if I had admission control in place that prevented policy-non-compliant images from executing, this would automatically be managed by that admission controller. We bake all this information in so that interacting with the Black Duck Hub user interface is not necessarily a requirement for an operations user; they can read out these annotations, bring them into a Sysdig or a Datadog, and do the right things based off the information that's present there. That's the second key element we have: we're not forcing people into our UI, we don't necessarily need to be there, and we're not forcing it from an automation perspective. So it gives a very easy, quick installation and quick results as to the state of these container images.

I'm going to go back to my next slide. I mentioned the Black Duck knowledge base; it's the critical component behind all of this, because at the end of the day it's pretty straightforward to figure out a way to scan an image, you could just decide to attach to the registry and export them all; how you actually do the mapping is where the magic comes in. Our knowledge base has been around for a little over 12 years now. It was designed with an understanding of how forks, forks of forks, and parallel streams of development that might actually be merged back in as feature elements work, because that's normal open source, that's what you want to see. We enhance all the security information we're collecting from the world with a security research team that today numbers a little over 50 people, and we're updating as the security issues occur, so even when the Struts updates of S2-052 and S2-053 came out last week, we had those in and fully mapped through within an hour, and being able to map those to public exploits is really crucial. At this point in time we're at half a petabyte of storage, and we're pulling in open source information from about 10,000 different data sources, where all of GitHub counts as one of them. Oddly enough, I update these stats once a quarter, and in the last three months we've increased the number of data sources by a thousand; I'd really love to see exactly what those are, and that's on my list to look at when I get back.

All of this is so that we have full end-to-end visibility: to inventory the components that are in place, map them to known security issues, identify those risks, manage them against your governance policies, and alert when the world changes around you, and to do this in a hundred percent automated way that requires no human interaction, such that, for practical purposes, if a human tried to get in the way, we would be able to detect that the human is messing with the system and trying to prevent scans from happening, as an example.
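To illustrate how the annotations from the demo could be consumed outside the Hub UI, say for an audit report or a dashboard feed, here's a minimal sketch that lists cluster images through the OpenShift image.openshift.io API group and prints the policy annotation. The annotation key below is the one named in the talk; verify the exact key in your own environment.

```python
# A minimal sketch of reading image-quality annotations with the
# kubernetes Python client's CustomObjectsApi. Images are cluster-scoped
# objects in the OpenShift image.openshift.io/v1 API group.
from kubernetes import client, config

POLICY_KEY = "quality.images.openshift.io/policy.blackduck"  # assumed key

def main() -> None:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    images = api.list_cluster_custom_object("image.openshift.io", "v1", "images")
    for image in images.get("items", []):
        annotations = image.get("metadata", {}).get("annotations") or {}
        if POLICY_KEY in annotations:
            # An admission controller, or an audit job like this one,
            # can act on the compliance verdict without touching the Hub UI.
            print(f"{image['metadata']['name']}: {annotations[POLICY_KEY]}")

if __name__ == "__main__":
    main()
```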
The end objective, of course, is to stay out of the news for the wrong reasons. We don't want any of our employers to be in the news for what you see on the screen right now; that's pretty bad, that's not going to get anybody any awards. But it actually could be a little bit worse: you really don't want to be in the news for this, having your employer go before Congress to explain why they didn't do what they thought they were doing in the first place. That's not going to help anybody.

So that's it for my presentation, but I want to take this opportunity to do a little bit of a promotion. Black Duck has its user conference; it's called FLIGHT. FLIGHT 2017 is being held in Boston at the Seaport Hotel and World Trade Center; if anyone watching this recording went to Red Hat Summit earlier this year, that's the same area. It's being held the 7th through the 9th this year, and we've got some really good content packed in around DevOps and security research and innovation. We'll have some of our researchers over who'll be able to explain some of the techniques they're using to simplify security.

Well, it sounds like a really good event. You know, this is one of those things where you could go down the wormhole and ask tons of questions about each and every one of these topics, but I think these kinds of sessions, and this event, will probably be a really good way for anyone who's in systems or security, or even developers who are working on applications, to get a better understanding of where the risk factors are and where they're coming from.

Yeah, completely. And the registration code they put down there, TIM99, that's the special code that we put together for the open source events that we do; that gets you in for 99 dollars, so that's a huge savings for anyone who has to justify to their boss that flying into Boston is a little bit pricey.

Yeah, well, Boston's wonderful, so I encourage everybody who has a chance to come and hear this, because there'll be lots of folks there; I think one of our folks, Gordon Haff, is going to be speaking at that event as well. And we'll probably be kicking off a security SIG in the OpenShift Commons shortly, so we'll try and incorporate some of that in the upcoming Austin event in December, at KubeCon, where we're having an OpenShift Commons gathering as well. So security's been top of mind for a lot of folks here.

You know, there haven't been any questions in the Q&A or in the chat, because I think everybody's just been sort of entranced, and I think you did a very nice job of demoing how to deploy and use it all on OpenShift. But I kind of wanted to make a comment: the one thing you don't want to do is get fired for any of this stuff, but even with all of the automation, if people ignore the warning signs and don't act, this is a perfect audit trail for why you would get fired, because the auditors would have all of the information there to figure out when you were notified and that you chose to ignore it. So there's a little back and forth there. And as someone who used to write auditing software, with this setup the metadata is now all there, tagged in the annotations; it's actually quite wonderful if you're a compliance officer in any company as well.
Yeah, and to your point about the auditing, Diane: one of the things that I assert makes for the lags we're seeing, such as what we have with Equifax, is the sheer volume of what's coming in and the uncertainty of "is this mine?" Do I have to, for example, scan my entire data center, or is it only in this area? These are the types of analysis that need to go into it. If you have a tool that can say, these are the applications using this thing that has now become problematic, even though it wasn't yesterday, focus here, that should hopefully help with the velocity of fixes. And even if you can't necessarily get to an instantaneous fix, at least you can get to that runbook for how to resolve it and implement that remediation plan a whole lot quicker.

Yeah. And we think that automation is going to save the world and everything, but also, some of the stuff that we built in with OpenShift and the annotations from Black Duck gives us the ability to block an image from being deployed, and I think that's really useful too, or at least to throw up some steps before something could get redeployed or respawned. So I think this is all great stuff, and I really appreciate that it's here, because it saves our butts quite often, and I'm really happy that you guys are doing this work. So as much as the Equifax folks have been blaming this on open source, really all of the pieces and parts are in place to have good automation and secure open source usage in your applications and in your enterprise. A bit of it is FUD, but there's a good dose of reality too, especially when you were talking earlier about the dead-ending of projects where there's nobody maintaining them; if you're depending on an open source project where there is no maintenance or upgrade path, those kinds of things are big risk factors that need to be taken into account. But the same thing can happen with a commercial company too: a commercial company can go under, and regardless of the SLAs, they disappear.

They disappear, yep. That's completely true, and it really comes back to: if your organization is dependent upon something, you need to be engaged with that dependency in whatever manner, commercial relationship or community.

Yeah. So thank you very much, Tim, for taking the time today to talk to us about this. This will get posted on the OpenShift blog shortly, we'll put up the links in here that you mentioned, and we'll shoot it out over the social channels. Hopefully we'll get you back again, and we'll see some of those lead times shrinking down from that 200 days or so, and it won't be around a Dirty COW or a Heartbleed; hopefully we'll have a good story to tell, with some reduction in those times.

We can hope. Thank you, Diane. All right, take care. Bye.