Hello, thank you for coming to my talk on building security into OpenStack. My name is Rob Clark. I'm the lead security architect for HP Helion, and I've been involved with OpenStack security for the last three years or so. I want to talk to you a little bit today about the OpenStack security group: where we started, how far we've come, and where we want to go next. For my part, I was asked by Thierry a long time ago to help bootstrap the vulnerability management team, and then went on to co-found the OpenStack security group along with Bryan Payne from Nebula, before we started working on other significant projects, and, to my eternal fame, I'm one of the many co-authors of the OpenStack security guide. So I'm going to take you through a little bit of OpenStack history and some of the reasons we ended up creating a security group in the first place. Back in 2010, this thing called OpenStack became a thing, and several releases have followed. This is a very, very scientific graph of the number of vulnerabilities observed in OpenStack over that time. In the early days, nobody really wanted to look at security in OpenStack at all; it was written very, very quickly to solve specific problems. Around about the Diablo/Essex timeframe, the vulnerability management team was formed, people started looking at things, and the number of vulnerabilities coming in through the VMT went up significantly. The picture at the end there just represents the fact that nowadays there are a lot of people looking at the code. In fact, we have security tooling that's come out of the security group that does exactly this, to try and catch vulnerabilities before they ever actually land in the code base. The earliest security vulnerability I could find was this one, 726359. I think it was in Nova, and it was a really common, simple vulnerability that, combined with some bad configuration or some other security failure, could compromise everything running on a single node. Just after the Cactus release, there was a security proposal written by Jarret Raim from Rackspace. It suggested a few things: a group that would be the public face of security for OpenStack, a private security mailing list, and publishing and maintaining security policies while resolving security issues in line with responsible disclosure. That all makes pretty good sense. It was a great proposal, and it stayed as a proposal for quite a while. Then there was a follow-up, which was to split out the vulnerability management team as a subset of people who could focus purely on receiving and dealing with responsibly disclosed vulnerabilities. So the VMT takes on vulnerability management, and the OSSG, although not formed anywhere apart from an update to a wiki page saying "wouldn't this be a great thing to have?", would take on the role of the public face of OpenStack security and do all those good things. Immediately after these proposals, two rather big vulnerabilities again landed in Launchpad. When you go back and look at these bugs, which is why I've put the numbers up there for you, you can see how the process for dealing with vulnerabilities has matured over time. These were just random bug reports that kind of landed, and there were ad hoc discussions about whether or not they were important, how to fix them, and that sort of stuff.
Whereas now everything's pretty formalized and we've got good processes for looking much more closely at vulnerabilities. After that, the VMT was launched. These are the lovely people running it today; I apologize for putting their photos up without asking their permission, apart from Grant, because he works for me. The vulnerability management team is a very small team: three or four people dealing with vulnerabilities on a code base that's absolutely huge, the biggest, fastest-moving open source project in history. There are four people dealing with the vulnerabilities that come in, so if you ever see these people, please buy them beer. There's one there and there's one there, and there are probably others sneaking around somewhere. So things go on like this: VMT processes improve, things become more standardized, and then the security group founders, that is to say myself and Bryan, meet in San Francisco, and everything was wonderful and great. So what do we do? From that point, we start taking on a number of challenges. There was a bit of bootstrapping going on, and the very first thing we started to do was actually an idea that came out of the vulnerability management team themselves: to create these things called security notes. The vulnerability management team were finding there were a lot of things being reported that didn't meet their requirements for a full vulnerability that would have an OpenStack security advisory along with it. It may not have been something worthy of a CVE; it could have been a common misconfiguration or a common issue in deployments. In fact, the very first one we wrote was "selecting LXC as your virtualization driver can lead to data compromise". Remember, we're going back to 2013 here, and containers have matured since then. This was around people naively choosing LXC over actual virtualization stacks and expecting the same level of separation for guests. Since then, we've issued about 50 security notes, and we continue to issue them on a regular basis. The security guide was a project that followed this. This was actually a sprint where we flew people out to the Bay Area from various organizations, HP, Red Hat, the NSA, various other places, got together in a room with a whole lot of whiteboards and sticky notes, and wrote a book in a week. And you can tell in places that it is a book that was written in a week, but it's largely full of good content and it's never been static; it's continued to be developed and enhanced. It is a security project, and as such it has a bug list on Launchpad. If you see things you don't like but don't necessarily want to fix yourself, just drop a bug on there, or jump on one that's already filed. Similarly, you can submit patches through Gerrit, as you would expect. One of the next projects that we engaged in was threat analysis. Threat analysis on open source projects is difficult. It's very difficult on projects that move this fast, when all of your developers are busy developing and don't necessarily want to do things like produce lengthy UML sequence diagrams or data flow diagrams, or sit with you and hold your hand while you try to do a whole bunch of functional decomposition on how their code works. Because of that, this is a project that has a lot of promise, but we haven't been able to push it as far as we would like. There are some published threat analysis models and documents out for the Keystone middleware.
They're available in our Git repos, and there'll be links to those at the end of the talk. It's definitely a project where I'm really interested in trying to get more people together, and I'll come back to that. A little while after that, we started more formally collaborating with the vulnerability management team. As I mentioned, the VMT is three or four people, and they do an excellent job, but sometimes there are vulnerabilities where it's not entirely clear if or how they can be exploited, and even where, because with something like OpenStack, context is really important. You may not consider something to be a problem for you as a private cloud vendor, but someone operating in the public space may have a really big problem with a certain vulnerability. So a few select people from the security group regularly consult with the VMT to help provide that wider context, or sometimes to provide some extra security expertise to cover any gaps there may be. Following on, and coming into the more recent past now, is Bandit. Try as I might, they don't have a really cool logo for me to put here yet, so I just had to steal something from one of their slides. I'm not sure how many people were in the last talk, but Bandit is a Python security code linter. It allows us to do some really interesting things with regard to finding vulnerabilities in Python code. It's a nice small project, and I would say it represents today's state of the art in finding vulnerabilities in Python, which may say more about how much time people have spent building tools to find vulnerabilities in Python than anything else, though certain vendors may be working on it. But right now, this is what we can use. We can put it in the gate, it detects a whole bunch of interesting stuff, and I'll talk to you a little bit more about that. In a similar sort of timeframe, we've been working on Anchor, which is an ephemeral PKI system designed for OpenStack. Anchor is designed to solve a whole bunch of problems around deploying TLS everywhere in OpenStack. In OpenStack, almost everything has to talk to everything else, and if you've been to some of the other talks, you'll realize that this creates quite a complex mess, and dealing with certificates becomes very difficult. You only have to look at bugs from six months to a year ago, where even OpenStack services weren't actually managing certificates properly. Our research suggested that even if you did have everything configured correctly, the client libraries being used in Python didn't support revocation, which is a cornerstone of PKI. If I give you a credential that says you will be trusted by everything in my organization for the next two years, and then it turns out it gets compromised three days later and revocation doesn't work, then you have some serious questions about the integrity of your cloud for the next two years. Anchor fixes this by doing some interesting things, like this concept of passive revocation, and I'll talk to that a little bit more as well. Recently we published a whole bunch of developer guidance. This guidance is written in a more conversational voice for developers, and it's linked right now off of security.openstack.org. It's a list of common security mistakes that we have seen OpenStack developers making time and time again. At some point these will be linked with Bandit, so when Bandit runs in the gate and sees something it's concerned about, instead of just flagging "bad thing here, this one's red", it'll say all that, but it'll also say "and if you want to understand why, go look at this link". These are again all in one of our Git repos and can be changed and updated as we move forward.
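To give you a flavor of the kind of mistake the guidance covers, here's a small illustrative sketch. The function and names are made up for the example, but the two patterns shown are real checks that Bandit performs.

```python
# Two classic mistakes from the developer guidance, both of which
# Bandit has checks for.
import subprocess

import yaml


def load_config(path, hostname):
    with open(path) as f:
        # Bad: yaml.load() can instantiate arbitrary Python objects from
        # untrusted input. Bandit flags it; the guidance explains why.
        #   config = yaml.load(f)
        config = yaml.safe_load(f)  # Good: plain data types only.

    # Bad: shell=True with interpolated input invites command injection.
    #   subprocess.check_output("ping -c1 " + hostname, shell=True)
    # Good: pass an argument list, so no shell is involved.
    subprocess.check_output(["ping", "-c1", hostname])
    return config
```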
So, in the very recent history, the vulnerability management team and the OpenStack security group merged. This made a lot of sense from an organizational point of view. The VMT remains a very small group of highly autonomous individuals within the security project, and they keep the same levels of confidentiality in their workflow, because they sometimes have to deal with really big vulnerabilities, and obviously those can't be shared with a large community like the one we have in the security group. So here's a little timeline running from Folsom until pretty much today. I think it's reasonable to say that we've been quite busy. We've managed to do quite a lot with not many people, and when we look at it like this, it's actually something I'm reasonably proud to have been a part of. The team has grown significantly, at least in terms of general membership, from myself and Bryan back in the day to, I think, about 240 or 250 people, because we've had a significant number of people join this week. I think that's partially because the security track has been so strong this week, which is great. I mean, this is one of the last talks on the last day of the conference. I don't know if you've been to many summits, but the last talk on the last day of the conference doesn't usually get a turnout like this. So it's great to see so many people taking security seriously, and I genuinely want to thank you all for making the time to come. This brings me on to an important change and, I think, a big achievement for us: the security group no longer exists as a thing, because we're now an official OpenStack security project. We operate as a horizontal team, much like the documentation team does, for example. We have various sub-projects, so, since I didn't think I had enough interesting graphics, this is the interesting process we've gone through. Awesome. Okay, so the security project is today composed of six activities: security notes, developer guidance, threat analysis, and the security guide, as well as a couple of tooling projects in Bandit and Anchor. This is good. At this point, we're really happy with where we are, and we're moving into what's probably fair to call a phase of maintenance and consolidation. We've pushed out a whole bunch of really interesting stuff, and there's more that we want to do, and I'll mention a couple of those ideas, but there's enough work here to keep people going for quite a long time. If anyone's interested in getting involved with the security group, this is a really good place to start. We have space for technical writers, coders, reviewers, and editors, so you don't necessarily have to be highly technical to be involved; there's pretty much a role for everybody right now. So, I mentioned the VMT and vulnerability management, and I'm going to speak a little bit towards that, because I don't think it's something that gets enough attention at the conference. As I said, they were established in late 2011 or so, from what I could work out, and very recently we came together and formed the security project in OpenStack.
They remain autonomous and independent and deal with vulnerability reports as they come in. It's a small group; if you see them, be nice to them, and I really can't reiterate this enough, do buy them beer. So this is the process they go through when a vulnerability comes in. Reports are received in a number of ways. It could be a private bug; they also publish their public keys if you want to send them encrypted mail, for when there's a particularly sensitive issue and you don't feel comfortable putting it on a bug tracker. Then they start working on actually confirming the problem, building up an impact description, and talking to the leads of the projects involved, trying to convince them of the problem or work out whether perhaps it isn't actually an issue. They go through several iterations of review, and CVE requests go off in parallel. Patch management, I know, is something they work really hard on, because they don't want to put patches out through the standard Gerrit system: all of a sudden everyone can see there's a patch for a vulnerability, and then they know there's a vulnerability, and if it's something the VMT are dealing with, it's potentially very serious. Beyond that, there's an embargoed disclosure list. OpenStack powers some of the world's biggest clouds today, and you can't really just pull the rug out from under these people and release a vulnerability without giving them any notice; it's not a very professional way of doing business. So there's an embargo process where certain organizations can request pre-notification of a vulnerability. There's only a limited period of time between that pre-notification and the bug being opened to the public, the patch being made generally available, and the OpenStack security advisory, an OSSA, going out on the mailing lists and being recorded on security.openstack.org. So, security notes, which I mentioned a minute ago. These exist to provide guidance; they're basically everything security-related that you need people to know about that isn't a security advisory. Common ones cover misconfigurations, and some cover can't-fix or won't-fix issues: OSSAs are accompanied by patches that fix things, so if there's a big design problem, or something that's not going to be fixed in that way, then sometimes a security note is the appropriate way of dealing with it. The note will discuss what compensating and mitigating controls you may need to put in place; there are notes out there that describe where you might have to position an IDS to deal with certain threats, for example. This is the part of the talk where I'm going to go through a few of these projects and highlight the areas where we're struggling, the areas where we could do with more investment in terms of time, effort, and people. We really need more editors and more subject matter experts getting involved with the security note process. From a technical point of view, we're also looking to build a more searchable format into the way we store and manage the notes. This is just a growth problem: it was kind of okay to have plain text files to start off with, but now we want to be able to look back over vulnerabilities across years and categorize them. We want to enable you as operators and customers to say, "well, for whatever reason, I'm using Kilo, and I'm using this service, and I'm using the up-to-date version of that service if it's supported, so I'm not worried about security advisories, but tell me all the stupid things I can do to mess this up," which is what OSSNs are there for: running everything on a single network, or doing other silly things like that.
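As a sketch of what that more searchable format might enable, imagine each note carrying a little structured metadata that tooling could filter on. This is purely illustrative; the field names and values are invented, not a format we've actually settled on.

```python
# Hypothetical structured metadata for security notes, so tooling could
# answer "show me every note relevant to this release and service".
# Field names and values are invented for illustration.

NOTES = [
    {
        "id": "OSSN-0001",
        "title": "Selecting LXC as your virtualization driver "
                 "can lead to data compromise",
        "services": ["nova"],
        "releases": ["diablo", "essex", "folsom"],
        "category": "misconfiguration",
    },
    # ... the rest of the ~50 notes ...
]


def notes_for(service, release):
    """Return the notes an operator of `service` on `release` should read."""
    return [n for n in NOTES
            if service in n["services"] and release in n["releases"]]


print(notes_for("nova", "essex"))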
The security guide: so, who here has read or seen the security guide before? Wow, that's great. That is really, really good. The security guide is linked off of security.openstack.org and is available in a number of formats. You can browse it in HTML, which is nice and searchable, you can download it as a PDF, and it's also available in actual dead-tree form from Lulu. I would recommend perhaps waiting a little while on that, because the version up there is the original one; if you want really up-to-date guidance, it's better to pull down the PDFs, which are built from the latest changes. We have full CI/CD for the guide: as anything gets updated in OpenStack, updates get written, they go through Gerrit, nice build magic happens, and we get PDFs at the end. So it's a living document. There are many active participants right now, and we are actively looking for more people. Again, this could just be editing. A lot of people had their hands up; I won't ask you to do that again, but I don't doubt that some of you may have found small errors in the book. From a content point of view, a lot of it's good, but there may be grammatical issues or things that don't flow well, because it was built by a room full of people in a week. We do a lot of work to smooth that out, and it's a great way to get involved. The developer guidance, like I say, is a reasonably new thing. The purpose is to help educate developers who are working on OpenStack projects, to enable them to write better code, and to integrate with tooling like Bandit, and not just Bandit, but also our threat analyses. It's nice to have one central, OpenStack-oriented place where you can refer developers when they have problems in their code, and this serves that purpose. Again, we need more people. The security group is at around 250 people, but the number of active contributors is significantly fewer, and I hope this talk demonstrates that we don't need too much more investment to be able to deliver a whole bunch of really useful stuff. The developer guidance is very useful towards that aim. Threat analysis, more than any other project, is the one that is probably lagging right now. There is definitely appetite for better threat analysis work in OpenStack. We have some discussions later today, and hopefully it's something we will address at the next security group mid-cycle, and if it's something you're interested in contributing to, that would be very good. I think we need to come to an agreement between some of the bigger companies who have been involved with this. HP's done a whole bunch of work on threat analysis, loads and loads and loads, and ideally I want to work with a couple of other big cloud vendors, the size doesn't really matter, but a couple of well-established players, so that we can put together the different bits of threat analysis we have each done and release a whole bunch of really useful documentation to the community to really bootstrap this.
Because building these things up from scratch is incredibly hard. The only other way we can attempt to do it is to get a few dedicated people to basically gatecrash the mid-cycles of all the major projects and drag developers off until they've worked through data flow diagrams and the like. So we want to get general agreement about what we need out of a threat analysis process, and the level of granularity we expect it to go to. To some degree, we could start with something reasonably high-level, aim for better general coverage, and then maybe drill down further. I've spoken to some of you in this room already, and if anyone is interested in threat analysis, even if you're just interested in the output and want some say in what that should look like, then please find me or reach out to the security group so we can have that conversation. So, I'm going to talk to you a little bit about the tooling, and I'll quickly mention Anchor first. Anchor is an ephemeral PKI system. That means it relies on passive revocation: it doesn't give you CRL files, and it doesn't give you any OCSP (Online Certificate Status Protocol) capability. It essentially works by giving you very, very short-lifetime certificates. It's backed by a decision engine, so it's a fully automated PKI. It's stateless, which means you could deploy it into DevStack trivially if you wanted to, and it means you can deploy it in silos: if you're trying to eliminate cross-connects between different security domains, you can do that leveraging Anchor. It's simple and robust, and it's also free; you don't need a service contract to be able to run it. It's under active development, and we're looking for more people to get involved and to come in and do reviews. We have designs around HSM integration and Barbican integration, and we do want to put it up on PyPI; I expect that will happen relatively soon. The more interesting point is validators. The way we have deployed Anchor before, it can be dealing with thousands of certificates a day, which is why you need a fully automated platform to manage them. The validators, in Anchor parlance, are what check the CSRs and all the important information a user has provided, to decide whether or not to give them a certificate for Coke or Google or whatever. We're really interested in seeing more use cases around that. We can do really interesting things that normal CAs won't, like doing reverse lookups on FQDNs and checking whether the RESTful request you got from a machine actually originated from the machine it's trying to get a certificate for, and other pretty smart things. It's not a silver bullet; it's not going to solve all PKI problems for all people, but it is really cool. There was a talk earlier this week entirely on this, presented by Doug Chivers, Tim Kelsey, and Tom Kamman, most of whom are in this room, so if you're interested, go and check out that talk.
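To give you a feel for the validator idea, here's a minimal sketch of what one of these checks might look like. This is not Anchor's actual API; the function signature and names are invented for illustration, but the logic, refusing a certificate when the requested hostname doesn't resolve back to the requester, is the kind of policy the talk describes.

```python
# Hypothetical validator sketch: deny a CSR unless the hostname it asks
# for actually resolves to the address the request came from. The
# signature and fields are invented; Anchor's real interface differs.
import socket


def validate_hostname_matches_requester(csr_common_name, requester_ip,
                                        allowed_domains):
    # Only ever issue for domains we are authoritative for.
    if not any(csr_common_name.endswith("." + d) for d in allowed_domains):
        return False

    # Resolve the requested name and check the requester really is one
    # of its addresses (approximating the "reverse lookup" idea above).
    try:
        addrs = {info[4][0]
                 for info in socket.getaddrinfo(csr_common_name, None)}
    except socket.gaierror:
        return False
    return requester_ip in addrs

# A CA policy is then just a chain of such checks: the certificate is
# only signed if every validator in the chain returns True.
```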
I mentioned Bandit already, and managed to sneak a little bit of Lego into my talk, which is always fun. It includes lots and lots of plugins, and I imagine a number of you were probably in here for the previous talk, which was all about Bandit. Bandit can sit in the gate for any major OpenStack project, detect vulnerable bits of code in Python, and report on them. At the moment, most people have it configured non-voting, because systems like this will inherently generate false positives, and when you start minus-one-ing developers' code, that's a really good way to get thrown out of the gate. So what it does is report on what it's seen, and the idea, and what I would like to see, is cores going and looking at that output before they plus-two something. It won't take them long; these are all smart people, they just don't necessarily work in the security space all the time, but once attention is drawn regularly to these things, I expect issues will be caught very, very quickly. As for status: it's actually up on PyPI now and integrated into a number of gates. At the moment it has around 10 active developers. It's built for OpenStack, but the only actual dependency it has is on PyYAML for some configuration, which means it can be applied in other places; companies like Uber, for example, are using it to verify a lot of their code. It's incredibly flexible, and it runs in fractions of a second over reasonably large amounts of code, which means you can use it for local testing too; it doesn't have to live in the gate. Just as you would run tox or whatever locally before pushing code up, I'd advise people to run this as well. And actually, we already have Bandit running in the Anchor gate for our PKI stuff.
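If you want to try that locally, the sketch below shows the general shape of it. The file and function are invented for the example, and the exact flags and output will vary with your Bandit version, so treat the invocation as indicative rather than gospel.

```python
# vulnerable.py -- the sort of thing Bandit catches in a local run.
import subprocess


def restart_service(name):
    # Building a shell command from input: Bandit has a check for
    # subprocess calls made with shell=True.
    return subprocess.call("service %s restart" % name, shell=True)

# Run Bandit over your tree before pushing, much as you would run tox:
#
#   $ pip install bandit
#   $ bandit -r ./mymodule
#
# It reports the file, line, and severity of anything it flags, such
# as the shell=True call above.
```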
So I'll speak to you a little bit about our plans for the future. I already shared this diagram with you; this is what we're doing at the moment, and I mentioned that we really need to maintain and consolidate it. While that wasn't a lie, there are other things I'd like to do as well. We want to spend some time looking at hardened configuration verification, and I've spoken to a few people here about that recently. I did a talk yesterday on hardening measures you might want to apply in a cloud, including isolation measures and ways of dealing with vulnerabilities like VENOM and the plethora of under-reported VM escapes that have happened in the last five years. It would be nice to have some tooling that people could deploy alongside their configuration management systems to verify that nodes are hardened appropriately. We can incorporate best practices from the security guide, and we can incorporate things that have been flagged up in OSSNs. We have some idea of how this might work already from some internal projects, and if there is good traction at the summit and afterwards, we can look at spinning up a proper OpenStack project for it.
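To make the hardened-configuration idea concrete, here's a toy sketch of what such a verifier might check. The specific checks, paths, and names are illustrative assumptions, not an actual project.

```python
# Toy node-hardening verifier: each check encodes one piece of guidance
# (e.g. from the security guide or an OSSN). Checks and paths here are
# illustrative, not from a real tool.
import os
import stat


def check_no_root_ssh(path="/etc/ssh/sshd_config"):
    # Root login over SSH should be disabled on hardened nodes.
    with open(path) as f:
        return any(line.split()[:2] == ["PermitRootLogin", "no"]
                   for line in f
                   if line.strip() and not line.lstrip().startswith("#"))


def check_keystone_conf_perms(path="/etc/keystone/keystone.conf"):
    # Config files holding credentials shouldn't be world-readable.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IROTH | stat.S_IWOTH) == 0


CHECKS = [check_no_root_ssh, check_keystone_conf_perms]

if __name__ == "__main__":
    for check in CHECKS:
        try:
            status = "PASS" if check() else "FAIL"
        except OSError:
            status = "SKIP (file not found)"
        print("%s: %s" % (check.__name__, status))
```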
Another thing that was mentioned was secure API testing. This is the idea that, in the gate, we would start analyzing different parts of the API, going beyond Tempest-style integration testing and doing some active security testing on the code that's there. It's an idea people have had before, and it's an idea that requires a lot of effort, so if anyone has any IP they want to share around this, or any ideas, if anyone's been doing some really good thinking on this, please let me know; it's something I'd absolutely love to bootstrap. The other project we're going to announce in the near future, which isn't on this slide, is a high-level project within security for monitoring all of the activities going on within OpenStack to bring native encryption to the various services. The idea is that you'll have one location you can go to, to see how far away you are from being able to deploy OpenStack with Nova ephemeral disk encryption, or Swift object encryption, or Glance image signing, or any other service that's looking to provide enhanced levels of information assurance by incorporating crypto. The idea is to shine a light on this for operators, so they can see how things are going, and to give developers and people interested in contributing one place to go to see the status of these projects and where they can get involved. So, in summary: the security project and the VMT provide vulnerability management, guidance for developers, a solution to PKI challenges that a lot of people are facing, security documentation on how to build a reasonably secure cloud, and a Python tool for linting security issues in the gates, and we were officially recognized as an OpenStack project recently. The next step is to build better ties with each project. One of our initial aims was a kind of embed model, where we would have one or two people from the security group heavily involved in each project, or try to incorporate one or two people from each major project into the security group more regularly. In some places we have managed this: the PTL from Sahara quite often hangs out with us in the security room, and we've started to discuss some of their issues. It's something I'd like to see more of, that is to say, developers from other big projects who are dealing with security challenges just dropping into the IRC room when we do our weekly meeting and saying, "hey, I've got this interesting problem," because there's nothing security people like more than jumping on interesting problems. I want to continue to increase security evangelism around OpenStack and push good practices. I really need to try and drive the threat analysis work over the next release, and I will work with anyone who's interested in helping with that. And one thing we've discussed internally before is adding structure to aid with training: trying to help develop a better security awareness and education plan for OpenStack, somewhat for operators; we're already taking steps in that direction with developers. So, in summary, we built a security group, and stuff is slightly more secure. So, questions? [Question from the audience.] Okay, so the question was, with regard to the crypto project I described, would we have core reviewers for where this is all coming together, and would they be willing to contribute, is that accurate? Okay, so the answer to your question is no and yes, I think. We wouldn't necessarily have cores. I'm not looking for the security project to take ownership of work that other people are doing, because that's not fair. The work on Swift encryption and elsewhere has actually been independently led, much of it by APL, some of it by IBM, some by HP, and by other organizations as well. I really want to bring it together in one place so we can shine a light on it. The reason I say yes to the second part of your question is that I absolutely want to drive developers to that work, to try and help with it, but in no way do I have plans for the security project to annex this stuff. We have enough things to deal with, but I think the crypto work is important enough, to enough people, that we should shine a really big light on it. Cool, well, if there are no more questions, thank you, I really appreciate your time.