All right, thank you all for coming. So my name is Major Hayden. I work at Rackspace. And today, I'm going to talk to you about holistic security for OpenStack clouds. And so just to get an idea before we get started, how many of you would say that you spend greater than 50% of your work week working on security? Oh, fantastic, all right. My people are here. OK, so as I said, we're going to talk about the approach that I, and some of the folks I work with, take to securing OpenStack today. But what I want to do to get started is put us in the right frame of mind. So for those of us that do security on a regular basis, we probably understand this, but the other folks might not. And let me see if I can advance the slide. All right, so take a look at this picture. It's one of my favorites. If you look, there's a storm coming in. It's in an older time. There's people on this boat. You can see some of them on the back looking to see where they could potentially go. The waves are getting bad. You can tell there's already cargo that's fallen out of the boat. There's some rocks over there. Things are not looking good. So this is one of those situations where everyone on the boat is doing everything they've been trained to do, pulling in sails, getting on top of the deck, helping out. But in the end, they're kind of rolling with those waves. Those waves might calm down at some point and they can get out of there. They might turn the ship over. They might slam it into the rocks. They're not sure. But they're doing the best they can. And there's really no way to get extra help. It's not like someone can leave port, go over there, and rescue somebody in these times. There's not a cell phone. There's not a helicopter. There's no Coast Guard, anything like that. And so if you don't work with security on a regular basis, sometimes doing security and working through breaches feels exactly like that.
Where it feels like you're in the waves. They're rolling over you. You're doing the absolute best you can. You're following the industry best practices. But you're kind of going with the waves until it finishes. So it can feel like a very lonely, very challenging venture. And it can get worse as you start to look at more complex systems. So you start thinking about scaling horizontally, adding more systems, adding more networks, maybe a new data center, and then you think about all the user requirements that come with it. And then users say, well, I want my infrastructure right now. Well, then how do you work security into that as well? So this just keeps compounding the situation and creates more challenges for everyone. And so sometimes securing an OpenStack cloud can feel like a trip to the upside down. So how many people have seen Stranger Things? OK, kind of a subset. It's very popular in America right now. But yeah, sometimes it feels like you're going to another world, to a flip side, to figure out how everything connects together. And sometimes it pits you against users. Sometimes it pits you against people in the business. And sometimes it pits you against other people on your team. But today, I want to propose that it doesn't have to be that way, even with something as complex as OpenStack. I think there's a way for all of us to reduce stress and have our workday feel a little bit more like this, as opposed to a boat rolling out in the waves. So hopefully by the end of the talk today, we'll have that under wraps. And for me, the key to this is taking the right approach to securing OpenStack. People have those phrases like, how do you eat an elephant? One bite at a time, and all these kind of strange phrases. I don't know why people are eating elephants all the time. But yeah, it's a completely different approach. So just to tell you a little bit about me, I've been at Rackspace for almost 10 years.
I've worked with OpenStack since 2012. And right now, a lot of what I'm doing is taking a look at our Rackspace private cloud and figuring out how we can make it more secure, how we can make it more compliant, and then also how we can find more ways to protect our customers that are new and innovative. Outside of work, I work on Fedora Linux a lot. Anybody use Fedora in here? All right, there's some of us. Okay, great. So I work with the security team and the server working group with Fedora. And also I have a terrible habit of purchasing domain names. For some reason I'll get an idea and then I'll run out and buy it and put a cat picture on it or something, so please don't give me any ideas. And so we talked about holistic on the first slide, and I think everyone has a different definition of it. The Oxford English Dictionary says it's characterized by the comprehension of the parts of something as intimately interconnected and explicable only by reference to the whole. That's really hard for me to understand. It's a lot of very large words that are oddly organized. So to me, what holistic means is a lot of very small things that, when you look at each one of them individually, you may not see a whole lot of value, you may not see a game changer, but when you start piling them together, it starts to make sense and it begins to have more value. So instead of one plus one equals two, it's a multiplier; it compounds. And so at least in the United States, we have a concept of holistic medicine. That's where you approach a human and you say, hey look, as a human, you're obviously a body, a mind, and a spirit. So it may mean that if a human is having a medical issue, does that really require medication? Does it really require surgery? Or maybe does this person need to go talk to somebody about challenges they're having in their life?
And then of course spirit means something a little bit different to everybody as well. But each one of these has to be working in tandem, and if you're only looking at one, you're not getting the whole picture, you're not getting the whole spectrum of who someone is. And so if we bring this down to OpenStack, we think about OpenStack as servers, software, and a business goal. And if any of those three is out of whack, you're gonna start having trouble. So I've seen a lot of people say, I wanna do OpenStack, and they go and get servers and get some software, but they don't have a goal. They don't know where they wanna go with it. And I've also seen people who rush out and get servers and they have a business goal, but they don't know how to deploy their software. They're like, our software has to sit on one giant server that's over in the corner that we've run in the same place since 1988. Well, that's probably not a great candidate to just toss into OpenStack. And if you've worked in security for a while, you hear the whole people, process, and technology thing all the time. These three things have to go together as well. You may look at a particular process within your organization and say, why does this process exist? If you look at it in isolation, it may not make a whole lot of sense. But when it's brought together with the people who follow it and the technology that enforces it, it starts to make a lot more sense and have more value. And it's not a new concept. Aristotle said most of this: the whole is greater than the sum of its parts. That's especially true in the case of OpenStack. So I think if you take a look at Nova all by itself, it can go build compute. That's pretty cool, it's great. Like, okay, I can build compute right now. But then you think, okay, we're gonna combine it with Glance, and then we'll combine it with Keystone, and we'll hook Keystone into our corporate LDAP server, and then we'll have Cinder offering block storage.
Obviously there's a lot more value that comes out of it at that point. All right, so that's a whole lot of fluff. How does this actually apply to securing an OpenStack cloud? For those of you who do security on a regular basis, you may be a little bit bored for a second, but we'll get everybody up to speed. So the first thing is assume that attackers will get in eventually. So in all of your sentences where you talk about, oh, if an attacker ever breaks in, or if we ever have a breach, just change the ifs to whens, and change the way you think about it. They will get inside eventually. And you have different groups of attackers as well. If people really do wanna get into your environment, they eventually will; they'll find a way. Attackers are on offense, and they can be wrong a ton of times, but a defender only has to be wrong once for the offense to get a goal. Sorry, I've been watching a little football since I've been here in Spain. And so a lot of people say, well, why don't we just secure the outer perimeter? I'm gonna build a castle. I'm gonna put an awesome moat around it. The moat's gonna be huge. It's gonna have sharks with laser beams. It's gonna be great. No one's breaking into this thing. Fantastic. And since I can't make a slide deck without a meme, I've added a meme in here to help explain this. If you've seen Inception, you'll like it. So you get into these conversations where you say, hey, we need to secure our OpenStack cloud. We need to go deeper. And then someone jumps in and says, hey, wait a minute, we just bought that expensive firewall for the perimeter. You know, it's huge, sharks with laser beams. Isn't that gonna be enough? And then you get in this situation where you just look back and you're like, no, it's not enough. It's 2016, we need more than that. And so if you've ever heard about defense in depth, that's the whole idea of it. It's just building small security improvements at multiple layers.
So that way, you say, hey, look, what if somebody does cross the moat? What if the sharks with laser beams don't work, or they're asleep, or something happens? What are you gonna do then? And so that's the rest of the security strategy. So individually, as we've been saying already, if you take a look at some of these changes, you might not see the value. But as you start putting these things together and looking at what you can do when everything is working in tandem, that's where a lot of the value comes from. So enough fluff. Let's get to the good stuff. I like ice cream, so I put ice cream in there. Anybody else like ice cream? Yeah, I like ice cream. So the way that I recommend people think about it, especially people that are first getting started, is to work from the outside in. When I was younger, my mom, I guess, thought I had terrible manners. So she sent me to something that in the southern US we call cotillion, which is where you learn how to do ballroom dancing and go to fancy dinners and introduce yourself to grown-ups. And we'd always be taught, work from the outside in. Grab the forks on the outside first and work your way in. It's the same thing when you're doing security with OpenStack. Start on the outside and work your way in. So we're going to peel back the layers of this onion today; that's the last metaphor I have, I think. I'll quit doing it after this. So we have four layers. We have the outer perimeter, which we already talked about a little bit. Then the control and data planes; if you don't know what that terminology means, I'll go into it in detail in a second. Then we'll take a deep dive into the control plane, where we'll talk about the OpenStack services and what I call the back-end services. And then we'll go even deeper into the OpenStack services themselves, with a horribly done GraphViz diagram that everyone will probably run out of the room when they see.
So, talking about the outer perimeter. This is where you prevent people from getting in in the first place. You make it really hard for them to get in. The goal here is to convince your attackers that it's easier to attack someone else's cloud. It's kind of a strange concept. I heard a CSO say this at the RSA Conference, of all places. It didn't make sense at first, but in the end, those attackers are gonna break into somebody's cloud. They're gonna break into someone's environment. What you wanna do is make sure that when they try to go after yours, they look at it like, oh, this is annoying. This person actually thought about security. Leave that one on the list, we'll go to the next one. And then they'll go to the next one. So that's what you wanna do. You wanna create that speed bump in the attacker's day. And so the concept here is, like we said, make it expensive for them to breach that outer wall. Make it so that every time they try to find another way in, they're delayed, or it's irritating, or something changes. And then when they do make it through, make sure you know about it right off the bat. Have some logging, and have some monitoring that comes out of those logs. We'll go into some of the details in a second as we go a little bit lower. The other thing to keep in mind is that perimeters have multiple openings. So let's say, for example, you have an OpenStack cloud that is partially exposed to the internet and partially exposed to your corporate network. A lot of times I'll see people secure the outside like crazy. They'll put up a VPN, and then they'll have a WAF, and they'll have this huge stack. But then on the inside, there's nothing. It's directly connected to their internal corporate network.
And so someone breaks into something internally, or maybe you have an unwitting internal attacker, maybe someone that has a compromised laptop or something. They can make it from there and go straight into that cloud without going through a firewall, without going through a VPN, no VLAN changes, anything like that. So if we start to break this down and get a little bit more tactical: we already talked about it, require a VPN from the outside. Give them that one extra hop that they would have to get through to get to your OpenStack cloud. And we'll go into detail on some of the APIs in a second. Then also make sure you're segregating your internal network. Make it so that if someone is on your internal network and they don't need to be in that OpenStack cloud, they can't reach it. And monitor all logins, successful and unsuccessful. Unsuccessful obviously makes sense to almost all of us that practice security, because it can show that maybe someone has lost their credentials, or lost part of their credentials, and someone's trying something or trying to break a password. But the successful ones are almost as useful. Take for example, you have a billing user, and that billing user always just goes into Horizon, grabs a report, and logs out. That's what that person does once a week. But then that person logs in Thursday night at 10 p.m. and is querying every Neutron network that exists on the host. Okay, well, that's kind of a problem. So look for those things. Look for that behavior that's a little bit unusual. And then track the bandwidth usage trends. Know if someone is exfiltrating data out of the environment. Know if they're pulling something out. Is someone downloading all of your Glance images? Is someone dragging data out of VMs that they shouldn't be? So monitor that as well. And so if we put this into visual form, on the left you would see the access to the internet. So the suggestion here is to have a VPN there.
The role of that VPN, number one, is to create a, well, not really an air gap, but a gap between the outside world and the OpenStack cloud. But what it also does is make any traffic that comes through there attributable to a user. So any valid traffic that comes through there, you can attribute back to the user that did it. And for invalid traffic, you can go back and say, look, someone's credentials are compromised, or maybe their laptop has been compromised, or a mobile device, or something like that. And then on the right side, have everything feed into a logging system that you then alert on. So go in there and understand what an unsuccessful login looks like. If someone fails to log in to Keystone a certain number of times, do you alert on that? So have all of that fed in, and NetFlow as well. Look at that bandwidth monitoring. Look at the metadata within the packets themselves and really try to understand what that traffic is. And so if we go a little bit deeper, we get into this concept of control and data planes. This may be kind of a foreign naming scheme to some folks, but within Rackspace this is what we call it a lot of times. So we think about breaking these things up. The control plane to us is all the OpenStack services, like Keystone, Nova, Glance, Neutron, but also all the services that help those services run: RabbitMQ, Memcached, MySQL, these kinds of things. And then you have what we call the data plane. That's where you're gonna have hypervisors and all the tenant infrastructure: their networks, their VMs, maybe their containers, any storage that anyone creates. And you can imagine, you don't really wanna have these two things heavily interconnected. It could create problems. So at this step, our goal is to keep the inner workings of the OpenStack cloud separate from the tenant infrastructure. There are quite a few reasons for that, and we'll go into them in a second.
And so the key concept here is that the tenant infrastructure has to have very limited access to that control plane, and vice versa. You don't want someone that maybe has a misconfigured VM, let's say they don't have a security policy applied, or they don't have a firewall applied, or their password is "password" or something like that. And someone breaks in and uses that level of access to wander into the Nova API, let's say, or into MySQL. Maybe someone's got a MySQL server where they never set the root password, and someone just kind of wanders in and dumps all the data that's in there. And then also we'll talk about protecting your cloud from exploits where someone can break out of the virtual machine and get access to the hypervisor itself. Because that may be where you have nova-compute running, and we'll get into that a little bit more with the OpenStack services. So the specific thing to do here is separate these three things, and note that there are three: the control plane, the hypervisors, and the tenant infrastructure. Make sure all three of those are separated so that traffic cannot easily get between any of them without going through a firewall, without meeting a policy. So worst case scenario, let's say someone had a VM escape exploit and they get access to the hypervisor as a normal user or as root. You don't want them to be able to use that access to then wander into your control plane and get more access to the environment. And then always use SELinux or AppArmor or TOMOYO on hypervisors to reduce the impact of a VM exploit. Anyone who knows me well knows that I'm a huge fan of SELinux, so if you turn it off, it makes me sad, as well as Dan Walsh. And so if you're not familiar with Linux security modules, there are three main implementations. AppArmor you're gonna find a lot more often on Debian and Ubuntu; SELinux you'll find on CentOS, Red Hat, and Fedora. And then there's TOMOYO, which a lot of people like.
I think it's really popular in Japan, last time I talked to some folks. And then when you look at libvirt, what libvirt does when it goes to actually build a VM is ensure that that VM is labeled, or has an appropriate AppArmor profile, so that the VM can only touch the storage and the networking and any other devices that it needs to touch. The nice thing about this is that if there is a VM escape exploit, like let's say someone knows of an exploit in KVM or in the Linux kernel that they can use to break out, they end up inside this little red box that you see on the slide. So they break out, and now they have access to the hypervisor, but they can't wander out of the very small little sandbox that they're in. Sure, they could go and destroy the disk for that VM, but if they were root already, they could have destroyed it from the inside. Now they're just destroying it from the outside. And they can only touch the network devices or other devices that have been assigned to that VM. So they're pretty much stuck. At that point they would need a vulnerability in SELinux or AppArmor or the Linux kernel to be able to break out of that and keep going further. So the moral of the story here is: do not disable SELinux or AppArmor on your hypervisors, ever. It's a terrible idea. And the performance impact is very, very small. I think the last study that I saw said the performance impact is less than 1% of CPU to have it running on your average host. So that's very, very small. And so now as we go one level deeper, we're digging into that control plane, the left circle that you saw on the slide before. This is where we wanna restrict lateral movement. If you're not familiar with lateral movement, what we're talking about here is, let's say for example, an attacker finds a way to break into something within your corporate environment.
Maybe they get access to a printer that hasn't been updated, or they get access to a laptop, or something's infected. They get that initial foothold, and then they start kind of moving around to see what's available around them. A lot of times they'll go and try to find Active Directory or LDAP and things like that. But sometimes people might be able to break into one piece of a cloud and then kind of move around and get access to other things. And I'll talk about the crown jewels regularly through here. What I'm really talking about are the databases and the message queues that are involved with OpenStack clouds, because that's your critical infrastructure for running the cloud. And so we now split into two more circles. The circle on the left is the OpenStack services: Keystone, Nova, Glance, Cinder. Then the back-end services, where you'll find MySQL, RabbitMQ, support services, and hopefully your syslog server as well. So the crown jewels are in that red circle. If someone gets access to your database, your Nova database, they can deny service. They can add access for themselves. If they get access to the Keystone database, they can have a lot of fun in there. And then also, if they can inject or reject messages from the message queue, they can cause problems. So if you're trying to send through a password reset, they could block those so that you wouldn't be able to get into the environment. Or they can inject messages that say, shut all the servers off. Shut this one off, shut this one off. And then you're kind of stuck trying to figure out what's going on. So if we dig into the key concepts here, obviously we want to allow the least amount of access between these two groups as possible. And we want to be able to restrict it down to the source, destination, and port. So for example, if Nova needs to go and talk to MySQL, put an iptables rule inside of the MySQL container.
Hopefully you're deploying in containers and not on bare metal with OpenStack. Get a firewall rule in there that says, hey look, I am MySQL. I will accept connections on port 3306 from the Nova API IP address, which is this. And that's it. And then go ahead and add those IPs in there. If you're deploying with Ansible and you have your inventory set up, that becomes really easy to template out. And as I said, if you deploy these services in containers, you have a lot more fine-grained control on process limits, networking, all that kind of stuff. There was a good talk from IBM earlier about securing things within containers. That's a key concept that helps a lot, and it's something we use as well. And so when we think tactically about this, you can put a load balancer or firewall between things like Nova and MySQL, or Nova and RabbitMQ. The nice thing there is you have a choke point. It's a choke point for monitoring: you can monitor hit counts on a Cisco device, or you can monitor throughput or NetFlow. But the other nice thing is that if you get into a breach situation and you're like, what is going on? What are these people doing? We know they're inside. You can actually just cut all access right there in front of the MySQL database server and RabbitMQ. That way, no further damage can happen in the environment. Someone could keep banging on an API, or maybe if they had access to the Nova API, they could keep trying to get access and do things with it, but they couldn't, because there's no way to drop a message off. There's no way to add something to the database. Obviously you wanna monitor those back-end services very closely. If you have a MySQL dump running at two o'clock in the afternoon and you know your database backups don't happen until four o'clock in the morning, obviously that's a huge concern. So if you see a spike on a graph or something, go and investigate it.
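To give a rough idea of what templating those per-service rules out of an inventory might look like, here's a small Python sketch. The inventory structure, addresses, and function name here are invented for illustration; they are not OpenStack-Ansible's actual variables or templates:

```python
# Sketch: generate iptables rules that allow only known API hosts to reach
# MySQL on 3306, then log and drop everything else hitting that port.
# The inventory dict below is hypothetical, not OpenStack-Ansible's format.
inventory = {
    "nova_api": ["172.29.236.10", "172.29.236.11"],
    "keystone": ["172.29.236.20"],
}

def mysql_rules(inventory, port=3306):
    """Build an ordered list of iptables rules for the MySQL container."""
    rules = []
    for service, addresses in sorted(inventory.items()):
        for addr in addresses:
            # One ACCEPT per known source address.
            rules.append(
                "-A INPUT -p tcp -s %s --dport %d -j ACCEPT" % (addr, port)
            )
    # Log, then drop, anything else that tries to reach MySQL.
    rules.append(
        '-A INPUT -p tcp --dport %d -j LOG --log-prefix "mysql-drop: "' % port
    )
    rules.append("-A INPUT -p tcp --dport %d -j DROP" % port)
    return rules

for rule in mysql_rules(inventory):
    print(rule)
```

The same pattern extends to RabbitMQ on its ports; the LOG rule before the DROP is what gives you the "alert on everything else" behavior mentioned earlier.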
And then one of the most critical things that's often forgotten: use unique credentials for every single service that reaches into MySQL and RabbitMQ. So for example, we use OpenStack-Ansible at Rackspace, and Nova will have different credentials for MySQL than Keystone will. And they'll be different than what Neutron's credentials are. Same thing with RabbitMQ. They'll be in separate virtual hosts, separate databases every time. That way, if someone breaks into your Nova API server, sure, they'll have access to the Nova database, but they won't be able to get into the Keystone database very easily. They'll have to find another way in. And so finally, the last group here is taking a deep dive into the OpenStack services themselves: all the Python-based services that you're gonna be running in there. The goal here is to know what valid communication looks like and then alert on everything else. And here's where it becomes a challenge. You start thinking, okay, well, I've got my Nova, I've got my Keystone, I've got all this. Well, Nova's not just Nova. Nova is the Nova API, and then you have the conductor, and then you have the compute nodes. Neutron's broken up into a lot of pieces with agents and things like that. So you're thinking, man, that's a lot of communication. How do I handle that? So what I did was build out an OpenStack cloud and then do an analysis on all the traffic that was coming through. And I'll send these slides out, so if this looks like an eye chart, don't worry. You'll get a full-size copy. Oh, that was nice. All right, so, wow, even the projector hates it. Make sure it's not wiggling over here. Oh no, oh man. I should've turned the Bluetooth off before I got started. So the idea here is that there are lots of predictable interactions. For example, if you see all the red lines, that's where services are talking to Keystone. They all talk to Keystone on only two ports. And then if you look, there are quite a few orange lines.
That's all the communication going to RabbitMQ, and those only happen on a couple of ports, depending on whether you're using SSL or not. And then the same thing with MySQL or Galera; you're gonna see the same types of connections. And then you think, well, wait a minute, there's a whole bunch of wacky connections. Yeah, sorry about the craziness going on there. There's a whole bunch of connections, for example, coming out of Horizon that go into other OpenStack services, but all of these are predictable. So this chart may be able to help you get through that. So if we think about the key concepts at this level, take that eye chart and bring it down to the concepts. These services are all very heavily interconnected, but the connections are predictable. You know what's gonna talk to what, and you know what is not going to talk to what. So for example, if I see the Neutron API drop a message in the message queue and Neutron's L3 agent picks it up, I'm not worried. That's something it's gonna do all day. That's predictable, that's fine. But if I see Keystone trying to hit Nova on, like, port 850, I'm gonna be like, what is going on here? That's unusual. So these are things where you can go in there and create firewall rules that allow all the valid traffic. And then if you see anything strange like that, where Nova randomly talks to something on another port, that's a really good sign that it's time to go in there and investigate what's happening.
It could be a bug, or it could be a new feature, but it also could be a breach. And so when we start talking about getting tactical: obviously, use iptables to limit the connectivity and alert on everything else. So have it log and drop those packets, then take a look at them and alert on them. And then, as I said before, this time around give each service a different Keystone service account with different credentials as well. That way, if one service gets compromised, you're not worried about it causing issues with other services. And then finally, monitor for high bandwidth usage and high connection counts. So if you normally have a certain number of connections between Nova and Neutron, or maybe between Nova and Keystone, and all of a sudden that goes up by a factor of 10 one day and there's not a deployment or something like that, it could be a sign there's a bug. It could be a sign there's a compromise. Maybe someone's trying to exfiltrate data from the environment. So now that we've gone through all four parts, let's wrap up. Over and over again today I've gone through these four steps. Analyze what's happening in the environment. Understand what's valid, understand what's invalid, and then find a way to isolate everything in that chain. So once you know what talks to what and why, start putting walls in between, with very, very, very small holes in them. And then continually monitor those. If strange stuff goes through that tiny hole that you poked in the wall, then obviously that's something to take a look at. Or if there's a new connection that's trying to occur that you're not allowing, it may be a new feature, it may be something you're using that you haven't used before, so consider changing those firewall rules. And then finally, repeat. Especially when it comes to a breach, really try to understand: how did they get in? What processes did we mess up? Was it a technology issue? What was it?
And go through this process over and over until you get it more and more secure. These small changes add up to a very strong defense. And the other nice thing is that all the changes are very small, so it's not like you're coming down like the Spanish Inquisition to find out what's going on in the environment. Because no one likes it when corporate security comes down and says, we need to talk to you about your OpenStack cloud. I've worked in corporate security, so I've had to do that a couple of times. Nobody likes that, but people love it when you say, hey, can we make this small change? It's part of this bigger goal that we're trying to get to, and we've already tested it in your environment and we know that it's gonna work. Those conversations work a lot better than, I have a ream of paper that shows everything that you need to go do to your OpenStack cloud. That's a little bit rough. And so finally, if you wanna try this out in a very quick way, try out OpenStack-Ansible. It's an OpenStack project that we contribute to, and quite a few other companies contribute to it as well. What it does is deploy enterprise-grade OpenStack clouds with Ansible. Oh, here we go again; maybe it's that bright white on there. So a lot of these features are already implemented in OpenStack-Ansible, and if they're not, they're actually on the way or being discussed to get in there. So if you have any questions, roll in there and take a look at that. And then also I'd be remiss not to mention our Rackspace Private Cloud powered by OpenStack, which is Rackspace's enterprise-grade OpenStack cloud. So if you go down to our booth in the OpenStack Marketplace, we can definitely talk about security or private cloud or anything like that in more detail. And so that's it for me. If you all have any questions, I'll be more than happy to answer them.
I think there's one mic right here, or y'all can yell and I'll repeat it, either way, whatever works. Oh yeah, so the first question was: how did I make that diagram that made the projector go absolutely crazy? What I did was bootstrap the cloud with OpenStack-Ansible, shut all the services off, and then start running tcpdump to do a packet capture. Then I rolled through that data to smash it down into just the unique connections, and used Python and Graphviz to get it into the graph. That was the hardest part, actually getting the graph to work out, because I don't know if anybody's ever used Graphviz, but it makes great stuff, and when you have a lot of nodes they just kind of go everywhere. The second half of that question was about dealing with the fire hose of logs. I think the first thing is that you've got to capture all the logs and make sure they're all going into one place, and then find out what you want to look at. So pick out the log lines in there that are problematic. Obviously a lot of them, like when an error shows up, are ones you can trigger on. I was in a talk just the other day, and I'm trying to remember which one it was, where someone had a list of error log items to ignore: things that just show up in OpenStack regularly and are not a huge sign of a security issue or a big problem. I wish I could remember who put that list together, but it was fantastic. If I think of it, I'll tweet it or something. Cool, any other questions? Okay, so the question was: I was mainly focused on private clouds, but what if you were going to take this same stuff and apply it to a public cloud? I think you could do a lot of things in a similar way. Obviously your scale would be a lot different.
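The pipeline I just described, collapsing a packet capture down to unique connections and handing it to Graphviz, can be sketched like this. This is not the actual script I used; it's a hypothetical reconstruction that builds the DOT text with the standard library so you could pipe the output straight into the `dot` command.

```python
# Hypothetical sketch of the diagram pipeline: collapse captured
# (source, destination, port) tuples down to unique connections and
# emit a Graphviz DOT description that `dot` can render.

def to_dot(connections):
    """connections: iterable of (src, dst, port) tuples, possibly with
    duplicates. Returns DOT source with one labeled edge per unique tuple."""
    unique = sorted(set(connections))  # dedupe repeat traffic
    lines = ["digraph openstack {"]
    for src, dst, port in unique:
        lines.append(f'  "{src}" -> "{dst}" [label="{port}"];')
    lines.append("}")
    return "\n".join(lines)

# Repeated packets between the same endpoints collapse into one edge.
captured = [
    ("nova-api", "keystone", 5000),
    ("nova-api", "keystone", 5000),   # repeat traffic, same connection
    ("neutron-server", "rabbitmq", 5672),
]
print(to_dot(captured))
```

From there, something like `dot -Tpng connections.dot -o connections.png` renders the graph, and as I said, wrangling the layout when you have a lot of nodes is the hard part.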
So you get into some issues. Obviously, when you add iptables into areas where it hasn't been before, you start dealing with connection tracking, and you have to make sure you have those conntrack limits high enough, that kind of thing. But I think a lot of it is very similar. You have to be a little more careful about your logging: what you're going to capture, what you're not going to capture, and what you're going to trigger on. And then also, you don't know as much about your users, so that's more of a challenge. You have to look at things more in aggregate, I would say. So you may be able to profile what a user normally does in the environment. If a user normally hits one of your APIs, maybe Keystone, let's say, five times a week, and then one week they come by and do it 5,000 times, maybe that's a way to have a flag come up on the customer's account, where someone in support could reach out and say, hey, look, is there an issue? Are you not caching tokens? What can we work on with you? Or redirect them to a knowledge base article or something like that. But still, I think a lot of the same things apply; it's just that you wouldn't know your users quite as well. You wouldn't be able to attribute things back to a person, or have someone go through a VPN to get there. I know there are some projects, like Repose, that aim to help there: you can put Repose in front of the API, and it can do the rate limiting, some of the filtering, and some of the hand-off of tokens and things like that. The next question was about what we use for the log files. At Rackspace we use Elasticsearch and the ELK stack, and then we'll go through and pull out certain entries that we know are problematic. I know some folks really enjoy the log manager products that some security vendors have, which will alert on those things.
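That per-user profiling idea, five Keystone calls a week versus 5,000, can be sketched like this. Again this is a hypothetical illustration: the function name, the history format, and the factor of 100 are all made up for the example; a real system would compare against something more robust than a simple average.

```python
# Hypothetical sketch: profile per-user API call volume and raise a
# support flag when a week's usage is far outside that user's own
# history (e.g. 5 Keystone calls a week, then suddenly 5,000).

def usage_flags(history, this_week, factor=100):
    """history: dict of user -> list of past weekly call counts.
    this_week: dict of user -> this week's count.
    Returns users whose count exceeds their historical average by `factor`."""
    flagged = []
    for user, count in this_week.items():
        past = history.get(user)
        if not past:
            continue  # no baseline yet; nothing to compare against
        average = sum(past) / len(past)
        if count > average * factor:
            flagged.append(user)
    return flagged

history = {"alice": [5, 6, 4], "bob": [200, 180, 220]}
this_week = {"alice": 5000, "bob": 210}
print(usage_flags(history, this_week))  # ['alice']
```

The point is that the flag drives a support conversation (are you caching tokens?), not an automatic block.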
I know some people are using Kapacitor, which goes along with Telegraf and InfluxDB and that whole stack. Oh, so the question is: do you monitor for successful and unsuccessful logins and that kind of thing? That really depends, because all of our customers are different. We have some customers where their private cloud is a dev and test environment, so for them security is important, but it's towards the bottom of their list of important things. And then you have others where that's where they do their critical marketing work, like new product releases and things like that. A lot of times what we urge those customers to do is hook authentication up to a centralized authentication source, and have that source already configured to do all that type of monitoring. That way the company knows its users better and can do the auditing at that level. So the question was: how do we hook up Keystone to a centralized authentication solution? Like I said, it depends on the customer. Some customers have Active Directory and they want to use that. Some customers just have plain LDAP and they want to use that. I don't see SAML too often, but it is something some customers prefer to use. What else? Recommendations on web application firewalls. Oh man, I was going to try and stay vendor neutral except for my own company, but I don't know, that's a good question. I've seen some good ones over the years, but that one's tough. I still kind of go back to ModSecurity every once in a while. I mean, it's a bear to configure, it's rough, but the price is right for sure. There are some good vendors out there, but I think it's a little bit less about choosing the vendor and more about making sure you're putting actually good rules in there and that you understand how your application works. I think that came up in the security ops session.
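On that Keystone question: a minimal sketch of what pointing Keystone at plain LDAP can look like in keystone.conf is below. Treat it as illustrative only; the URL and DNs are placeholders, and the exact option names and defaults vary across OpenStack releases and directory layouts, so check the docs for your release before using anything like this.

```ini
# keystone.conf -- illustrative fragment only; option names vary by
# release, and the URL and DNs below are placeholders.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=keystone,ou=service,dc=example,dc=com
password = REDACTED
suffix = dc=example,dc=com
user_tree_dn = ou=users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

The payoff, as I said, is that lockout policies, password rules, and login auditing then live in the directory the company already watches, instead of in each OpenStack cloud separately.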
That was yesterday, about doing some kind of filtering in front of OpenStack APIs, and maybe having an open-source ModSecurity rule set that you could put in front of there. So that could turn into a project at some point. All right, anything else? All right, well, thank you all very much. I appreciate it.