Okay, I'm going to jump ahead to mitigating risk, but let's talk about patch latency for a moment. Patch latency is a great way to measure security: whether or not you're patching quickly. Inside the VMT, there's usually about a week of latency between the report of a vulnerability and getting a patch out the door. The question for most of you, or for your employers, is how quickly you're patching vulnerabilities: how quickly you're isolating them, finding out they're there, and then fixing them. That's a really good metric for knowing whether your security, operations, and development teams are in sync on how to approach security problems.

So, mitigating risk. The real goals for mitigating risk are awareness of the risk, reducing exposure, diversifying the risk, investing your resources intelligently, and ultimately being prepared, because no matter what you do, one day you're still going to end up facing a real problem.

Awareness. These are the things I'm mentioning as potential solutions for awareness. Auditing obviously works.
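The patch-latency metric described a moment ago boils down to tracking two dates per advisory and summarizing the gap; a minimal sketch, with invented advisory records standing in for real VMT data:

```python
from datetime import date
from statistics import median

def patch_latency_days(advisories):
    """Days between vulnerability report and patch release, per advisory."""
    return [(a["patched"] - a["reported"]).days for a in advisories]

# Hypothetical advisory records; real data would come from your VMT tracker.
advisories = [
    {"id": "OSSA-0000-001", "reported": date(2013, 1, 2), "patched": date(2013, 1, 9)},
    {"id": "OSSA-0000-002", "reported": date(2013, 2, 4), "patched": date(2013, 2, 12)},
    {"id": "OSSA-0000-003", "reported": date(2013, 3, 1), "patched": date(2013, 3, 6)},
]

latencies = patch_latency_days(advisories)
print(median(latencies))  # -> 7 (median days from report to patch)
```

Tracking the median rather than the mean keeps one slow outlier advisory from distorting the number you report.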
We know auditing works. We know that several of the contributing companies are doing their own internal audits, and some of them are doing external audits as well. More of that is going to be good for us.

An OpenStack vulnerability database is one of the things I want to do this release: take the OSSAs, the CVEs, and the CVEs related to our Python dependency sets, and build first a database schema and then an API on top of it that lets people query based on a configuration of options, get the statistics they need, and consume them in a uniform manner. This is all about enabling our community to share resources in discovering their own risk factors.

Better visualization of Keystone trusts in Horizon is something else I'd love to get to this release; I talked to a couple of guys at Verizon about doing this. The thing is, if you look at the Quantum visualizations we have, they're kind of great because they show users how the network models exist. When we start generating trust-bearing certificates in Keystone, the problem is that people need to know they're there, need to know whether they've expired, and need to know whether they've handed one out to someone else. Going out of our way to make sure there's a mechanism for users to know what they've given away in terms of trust is important.

Better supply chain security: things like PGP signing on PyPI; ensuring that our GitHub artifacts are signed; ensuring that our dependencies on PyPI and GitHub are signed; ensuring that packaging maintainers at Debian, Ubuntu, and Red Hat are all talking to each other and doing the same or similar things; and ensuring that what they put up there stays what they put up there.

And of course the OpenStack Security Group. Participation: several members of that team are in the audience, and they gave a talk earlier today.
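The vulnerability database proposed above could start very small; a hypothetical sketch of one possible schema and the kind of query a uniform API on top of it might expose (every identifier and entry here is invented for illustration):

```python
import sqlite3

# Hypothetical schema: one row per advisory, tagged with the affected
# component and an affected-version spec.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE advisories (
        id        TEXT PRIMARY KEY,   -- OSSA or CVE identifier
        component TEXT NOT NULL,      -- e.g. 'keystone' or a Python dependency
        affected  TEXT NOT NULL,      -- affected version spec, free-form here
        summary   TEXT
    )
""")
conn.executemany(
    "INSERT INTO advisories VALUES (?, ?, ?, ?)",
    [
        ("OSSA-0000-001", "keystone", "<2012.2.4", "illustrative entry only"),
        ("CVE-0000-0001", "lxml", "<2.3.5", "illustrative entry only"),
    ],
)

def advisories_for(component):
    """The kind of lookup a configuration-driven API endpoint could expose."""
    rows = conn.execute(
        "SELECT id, affected, summary FROM advisories WHERE component = ?",
        (component,),
    )
    return rows.fetchall()

print(advisories_for("keystone"))
```

A real version would add version-range matching and feeds for the Python dependency sets, but a queryable table plus a thin endpoint is enough to let deployers consume the data uniformly.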
That talk was undoubtedly awesome. We need more people on that team. The reality is there's a lot of work to be done and not a lot of people doing it. The OSSG exists so that the community can get together and solve these problems, and we need more participation. Hopefully that will help with awareness.

Reducing exposure. These are some of the things people do to reduce exposure; I'm not going to claim any of this is the right answer for you. You can do network isolation, and that's big. We at Cloudscaling don't believe in network isolation at layer two; we believe in network isolation at layer three. Other folks need layer two; they need VPN tunnel endpoints as a service. So in terms of reducing exposure, that's something you figure out on your own. Meta-clouds are also big this year, apparently: OpenStack on OpenStack, being able to deploy clusters in different security zones. Air gapping, which you see for things like ITAR requirements. Trusted computing integration, which is Eric Windisch's talk shortly after this; I think it's the last one of the day, actually. And better CI gate checking for common vulnerabilities: we don't currently do much of that. We do some; we have some static analysis tools in the CI process, but we can do better, and that's one of the things we'll try to work on in the coming Havana cycle.

Diversifying risk is what OpenStack does really well.
This is one of those selling points you can bring home: cloud computing is designed for diversifying risk. We've all heard about the puppies and the cows, and when we start treating things like cows, you don't care that much about any one of them. What you're really doing there is saying, "I'm accepting the risk, but I'm spreading it across many things," and it becomes less risk overall. The open standards, the federation, the regions, the hybrid cloud models: these are all designed to address potential risk factors, including security risk factors. The great thing about OpenStack's diversity is that it can be leveraged for exactly that, in particular with things like hybrid hypervisor models. All key selling points.

In terms of being prepared: things are going to go bad. The question is not why they go bad; it's how you respond when they do. These are pretty straightforward.

Incident detection. I found this image online because I thought it's pretty obvious that there's something going on with one of those houses. You can usually find an anomaly fairly easily when you have a very homogeneous data set. Clouds have homogeneous data sets, so it should be easy to spot the one house that's a little crazier than the others and go investigate it. The first part of this is detection: you want to know when something breaks. We know about the risks, we accept the risks, and when the risks occur we're eventually going to have to pay the piper, so we plan in advance: we have procedures, we prepare, we have resources available. These are some of the things we can do to increase intrusion detection capabilities throughout OpenStack: security APIs, leveraging Ceilometer as an event logger, maybe looking at Marconi as an option for event logging. These are things we need to look into over these releases and make available. Precursor indications are fairly easy, and
I'll get into that in a moment.

External reporting is kind of important. We're not reaching out to each other enough. We're not saying, "Hey, we had an incident; here's what happened; here's what went down." We need more of those operations catastrophe stories and security catastrophe stories. We had a few of them in the operations manual this year, and I hope you read them; we need more that relate to security.

Then there are security services in SaaS. A lot of companies show up here now, like CloudPassage, with solutions they can spin up inside a SaaS environment. The benefit with cloud is that you can do things like set up a baseline image that does scanning and make it available to your users, so they can do audit-as-a-service and get some baseline metric for whether they need to do more work before they go into an actual manual audit.

Back to anomaly detection: the great thing about homogeneous environments is that it's easy to see when something goes horribly wrong. This is from an old cluster I used to work on, and you can see a couple of things horribly wrong just by looking at a histogram of volumes. One: we probably shouldn't be looking at /var/lib/nova/instances in a histogram, because it grows wildly and throws things off; you can see that in the first couple. You can also see that there's no /var/log on four of the machines. How did that happen? Go investigate. In that case we discovered that four of the machines weren't actually the machines that were supposed to be in that rack, so when they were auto-provisioned they ended up missing disks. There's other cool stuff in here too.
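The eyeballing-the-histogram step just described can be automated with a simple robust-outlier test; a toy sketch with made-up per-host /var/log sizes, not an OpenStack tool:

```python
from statistics import median

# Hypothetical per-host sizes (MB) of /var/log across one homogeneous rack.
var_log_mb = {
    "node01": 512, "node02": 540, "node03": 498,
    "node04": 0,       # no /var/log at all: mis-provisioned disk?
    "node05": 525,
    "node06": 30000,   # runaway logging: tracebacks or verbosity left on?
}

def outliers(samples, k=10):
    """Flag hosts more than k median-absolute-deviations from the fleet
    median. Median-based, so one huge outlier can't hide the others."""
    med = median(samples.values())
    mad = median(abs(v - med) for v in samples.values())
    return sorted(h for h, v in samples.items() if abs(v - med) > k * mad)

print(outliers(var_log_mb))  # -> ['node04', 'node06']
```

A mean-and-standard-deviation test would miss the empty host here, because the 30 GB machine inflates the standard deviation; the median-based check catches both.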
You can also see when /var/log grows exponentially. There isn't one in this set, but we've seen it before: the /var/log histogram will start going bad on lone machines, and you know that either somebody's hammering an API there, or you're getting a lot of tracebacks in Python logging from Nova, or somebody left verbosity on. One of the other cool ones in here, I believe, is a root filesystem: one of them is frequently larger than it should be. Yeah, slash over here is enormous on this guy. That was basically the result of an operations guy going into a machine and leaving behind a large number of files that he shouldn't have.

So anomaly detection is very cool, and we can do all of this with the metrics we have available in Ceilometer. We can do it with libvirt metrics, with Nova metrics, with Keystone metrics. One of the things we need to do is share the tools for this. We need a repository for sharing that sort of code and having that conversation back and forth, so I'll drop some of our code during the year on secstack and on GitHub.

This is the important stuff here: incident response. You guys know this by now: have a plan. Consumers must have a known, supported workflow for response; disclosure, breaches, and other issues should be planned for ahead of time; and don't panic. It all sounds very easy, but the reality is, if you've ever been mugged, or been in a car accident, or someone close to you has ever been attacked, you know that in those first few hours you're not thinking 100% straight. You're going to do stupid things. You're in the hospital worrying about somebody else's well-being, and you're not going to think to call a lawyer. Having this stuff planned out ahead of time, having that bullet-point list, is super important. So plan ahead. Think about what's going to happen when somebody breaks past your
hypervisor and starts owning your machines. How do you recover? What data do you pull? Where do you put it?

That's where we get into chain of custody, and this stuff is important. People deploying OpenStack today aren't thinking about chain of custody, or about the real capabilities we have in those terms. We can do things like snapshot an image, pull it off, store it, and keep it around. We can do things like move a VM into an isolated zone. But we can't do things like move an instance from one tenant to another. So how do your SOC teams get in to investigate other tenancies? Do they have accounts that put them in every tenancy? These are interesting things that could be fixed.

Oh yeah, the logging and one-way DMZ stuff is super important too. Make sure you have an audit log, an event log, a shell log if you're that crazy, and that it sits in a one-way, non-accessible DMZ and is always getting updated. Very important stuff. I think the RPC signing work will help a lot with making event logging more trustworthy in OpenStack; I think that's big stuff coming.

I threw a couple of good reads in here. I don't know why it's not on the slide, but I had a GitHub link to the JSON database of all the vulnerabilities that's available now, and I'm putting together more JSON on Python dependencies as well. That'll be on my GitHub, and I'll post it to secstack and the Simplicity Scales blog, so all of this should be available for download from Cloudscaling and those other places shortly.

So I'm now in the Q&A period. Questions and answers: anyone have questions? Maybe I have answers, or you guys have answers.
So, anyone? Dead silence.

[Audience question, inaudible.]

For the most part, we're not involved with tenant security guidelines. That's something the vendors usually figure out for themselves, but as a component of that they'll ask us questions about how to implement their procedures. And I think that points to a problem: a lot of customers have this idea of, "Well, we already have these policy guidelines; we're going to follow them, apply them, and jury-rig them to work with cloud." That's not really the right way to do it, and it's why the government did things like FedRAMP: they said, "We have all these policies and they just don't work, so we need other ones that do." I think there's a larger national discussion that needs to happen about how cloud security models and policies should operate. FedRAMP is not a bad starting point, and parts of NIST are good too; NIST has a really great breakdown on security. I'd say those are two good models to look at as a baseline if you need to hand something to your customers. That's what I've got on that for now. Any other questions?

[Audience question, inaudible.]

Well, we're already discussing the VPN-as-a-service stuff. I think on the networking side there's going to be a bunch of really cool as-a-service capabilities; they're going to solve the problems of the specific people willing to invest effort in them. But there are also the money-makers, for instance the CloudPassage style, where you download a baseline instance whose database they keep updated and it does vulnerability checks. One of the things I'd like to do with the vulnerability database, if we get it going, is something like automated checking. This is blue-sky engineering at this point, no idea if anyone would ever even let me do it: do something like a state check inside an OpenStack cluster. Hey, what have I got in terms of imports?
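The blue-sky state check could look something like this sketch: compare what a node reports as installed against a vulnerability feed. Package names, versions, and the feed structure are all invented for illustration:

```python
# What a node might report after introspecting its imports/packages:
installed = {"lxml": "2.3.4", "requests": "1.2.0", "six": "1.3.0"}

# What a (hypothetical) vulnerability API might return:
# package -> (fixed-in version, advisory id).
vuln_feed = {"lxml": ("2.3.5", "CVE-0000-0001"), "six": ("1.2.0", "CVE-0000-0002")}

def parse(version):
    """Turn '2.3.4' into (2, 3, 4) so versions compare numerically."""
    return tuple(int(p) for p in version.split("."))

def needs_update(installed, vuln_feed):
    """Packages running a version older than the advisory's fixed-in release."""
    return [
        (pkg, advisory)
        for pkg, (fixed_in, advisory) in vuln_feed.items()
        if pkg in installed and parse(installed[pkg]) < parse(fixed_in)
    ]

print(needs_update(installed, vuln_feed))  # -> [('lxml', 'CVE-0000-0001')]
```

A Horizon integration would just render this list as notifications: the interesting work is in gathering honest version data from nodes and keeping the feed current.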
We do this kind of introspection a lot already, when identifying which API queries work and which don't; the clients do it a lot as well. So it'd be kind of nice if we could do a state check and ask: what versions of what things do I have plugged in when I do an import? Then query against the API and see if any of them need an update. We could integrate that with Horizon or something, and it could say, "Hey, you need to update Python dependency X, because this version has a CVE out right now." If we could do more notifications in that regard, I think it would help a lot. The problem with OpenStack is that it's too damn big: there's a lot of stuff going on, and people have a hard time tracking it all. Helping make that more community-supported is something I'll be working on personally. I don't know what you guys want to work on, and that's really the issue, right? It's open source.

[Audience suggestion, inaudible.]

Yeah, we probably could, actually; that's probably a good place for it. We could do Tempest checks against the vulnerability database for dependencies. That's not a bad idea; definitely worth looking at. Very good idea.

Any other ideas? Thoughts? It's four o'clock now, so I think we covered this fairly quickly. Is everyone happy with where we are? Can I close up and head out? Okay, thank you, guys.