Hi, everybody. Thank you for coming to this talk. I think I've been really subtle with the title of this talk; I don't want to over-promise too much, so we'll have to see where this goes. My name is Rob Clark. I work for HP Helion on security, and I've been doing that for a number of years. This talk is going to be somewhat of a whistle-stop tour of security technologies that are available in Linux and in OpenStack: different ways to enhance security and to help contain threats, and hopefully, in applying these technologies, you won't completely break your cloud for your customers.

So, who here is involved in running a cloud for some internal or external customers? Awesome. OK, so all of this should be well known to you, and I'm sure you're all doing everything perfectly well. So why am I here talking to you today? I'm the lead security architect for HP Helion, and I've been working on cloud security technologies for a number of years now. Much of my focus, at least for the last three years, has been OpenStack. I've been very involved in trying to push security solutions upstream: working on starting up the vulnerability management team, which handles security advisories (I'll mention those a little more as we go on), and I co-founded the OpenStack security group, which exists to provide you with a number of different security functions. We've managed to grow from the two of us back in the Folsom timeframe to, I think, 256 people, something like that, with probably 30 or 40 active contributors. And for those of you that are involved in other OpenStack projects, you know that actually having 30 or 40 active contributors is pretty good. And to my eternal shame, I'm one of the co-authors of the security guide.

So I want to talk to you a little bit about the security project. The OpenStack security group recently incorporated the vulnerability management team and applied to become an official project under OpenStack, and I'm glad to be able to tell you that we were actually accepted a few weeks ago. So the security project is now a horizontal team within OpenStack, much in the same way that the documentation team is. We're responsible for providing a number of different services to OpenStack. The vulnerability management team remains largely autonomous within the group; they're there to deal with the really nasty things that come in, which they have to respond to quickly and confidentially, and they issue OpenStack security advisories. We also issue things called OpenStack security notes. These cover the won't-fix or can't-fix issues that come in through the vulnerability management team; they deal more with design issues that will have lasting effects, and also with third-party vulnerabilities and third-party issues. We work on the security guide, which I'll mention a little more in a moment, and on threat analysis for various projects within OpenStack; at the moment that's something we're going to have to focus a little bit more on, but we have some published threat analysis out for Keystone. A new thing we're doing at the moment is developer guidance: on security.openstack.org we have a bunch of developer guidance that we've written to give developers easier access to OpenStack-centric security guidance, covering things we've seen developers do wrong in the past. We also have tooling projects: Anchor and Bandit are the two tooling projects we have at the moment. Anchor is an ephemeral PKI system that I'll speak to a little more as the talk goes on, and Bandit is a Python security linter for finding vulnerabilities in Python code; it's actually integrated into the gates of a number of projects to detect and give early feedback on vulnerabilities that might otherwise go out into OpenStack.
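To make that concrete, here is a hypothetical illustration of the class of issue Bandit flags; this example is mine, not from the talk. Building a shell command by string concatenation is reported by Bandit as a shell-injection risk (check B602), while passing an argument vector is not:

```python
import subprocess

def list_dir_unsafe(path):
    # Flagged by Bandit (B602): shell=True with interpolated input
    # allows command injection if `path` is attacker-controlled.
    return subprocess.run("ls " + path, shell=True,
                          capture_output=True, text=True).stdout

def list_dir_safer(path):
    # Preferred: argument vector, no shell involved, so no injection.
    return subprocess.run(["ls", "--", path],
                          capture_output=True, text=True).stdout
```

Running `bandit -r <project>` over a tree containing the first function should produce a finding; the gate integration mentioned above does essentially that on each proposed change.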
So, the security guide. I'm mentioning this because all of you that are working on OpenStack clouds and providing customers with services should at least have the PDF copy of this. It is also available in HTML form, although, in fairness, that is a little bit out of date now; that's going to change during the Liberty cycle. It has lots of information around isolating security domains, best practices, hypervisor selection, that sort of thing, and it's a good starting point for a lot of the things that I'm going to discuss today.

Originally, I was going to spend a lot of time in this talk going through the different ways that you can secure different OpenStack services. The idea is that we're going to walk through, to a certain extent, stages of consideration: before you install your cloud, how to know who to trust; when you're doing the installation, things to consider; and then post-installation, how do you mature your solution? We're going to run through a kind of menu, really, of things. We're not going to be able to go into too much depth on any single one of them, but the idea is that you're all smart people: if I make you aware of something you weren't aware of before, you can go away and work out how best to apply it yourself.

So that was the plan. And then: is anybody aware of a big vulnerability in virtualization technology recently? No? Yeah? Okay. So I got a lot of feedback, and some questions started coming in, regarding this little guy. The VENOM vulnerability landed, and it was quickly apparent from a number of the private messages I got from people working at other cloud organizations that this was causing some pretty serious concerns.

So what I've actually done is spent a whole bunch of time talking, not necessarily about VM escapes, but about the different containment and isolation technologies that exist today that would allow you to deal with issues like VENOM in ways that mean you can compensate for malicious actors without having to basically unplug your cloud. We're going to talk about both of these things and try to squeeze it all into the time that we have available.

So this is probably a triangle that's familiar to a number of you in the room; it's a fairly standard way of describing threat actors. At the top, we have intelligence services. There used to be a little F in front of that, for foreign intelligence services, but we've got rid of that now. Serious organized crime is below that: there we're talking really about the Russian underground, that sort of thing. Then highly capable groups, which are people like Anonymous, then motivated individuals, and then script kiddies. As you move down the triangle, the sophistication of attack goes down and the likelihood of exploitation goes up. There aren't many organizations that could stand up and tell you that they'd be resilient against any attack from an intelligence service, but we'd like to think that we might be able to protect most things from script kiddies. So you draw a sort of wavy line here in terms of what you're expecting to protect against, and I think that's something that probably rings true for most people in this room. The reason you draw the wavy line there is that the cost of the controls you have to apply to protect against anything above that level becomes incredibly expensive and will probably break your business.

So we start off at the bottom: understanding who's going to be interacting with your hardware, and whether you need to trust it. Are there supply chain issues, which mean you want to place special requirements on the provider of your hardware? How are you going to handle technicians and admins in your data centers? Are your data centers yours alone? Are they shared colos, if they're colos? What's the physical security like? I once did a review of a data center somewhere, where someone explained that it was a colo, but it was in a cage and they had locks; but they couldn't tell me when the locks were last changed, or who had the cages before they moved in. Things like that you need to be aware of. For a lot of physical access concerns, there are well-understood compensating controls: established staff, systems that require multiple staff members to be involved, vetting, background checks, and also men with guns. I find men with guns are pretty good compensating controls against intentional intrusion into data centers.

But this is a technical talk, so what we've got up here is a picture of a TPM. Does everyone know what a TPM device is? A Trusted Platform Module. Okay, so there was an excellent talk by Matthew Garrett a couple of days ago at this conference going into a lot more depth than I will about the TPM. It provides you with a number of functions: it allows you to attest to the state of different things on a system as it boots up, and it provides, amongst other things, a way for you to verify that your BIOS hasn't been modified and that changes aren't happening with PCIe devices. One of the limits of the technology today, the point down at the bottom there, is that attestation is really only at instantiation: it allows you to check that a hypervisor has come up in a known good state. But there's limited value in knowing that three months ago your hypervisor was in a known good state, especially when you have things like VENOM. So this is where the costs of controls come in.
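As a small aside on what attestation measures: boot-time state ends up in the TPM's platform configuration registers (PCRs). A hedged sketch of reading one; the sysfs path is an assumption about reasonably recent Linux kernels, which expose per-bank PCR files under `/sys/class/tpm/tpm0/`, and nothing here is an OpenStack interface:

```python
def read_pcr(index, bank="sha256"):
    """Return the hex value of one TPM PCR via sysfs, or None if unavailable."""
    # Assumed path layout (kernel-dependent): /sys/class/tpm/tpm0/pcr-<bank>/<n>
    path = f"/sys/class/tpm/tpm0/pcr-{bank}/{index}"
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        # No TPM present, older kernel interface, or insufficient privileges.
        return None
```

Comparing these values against known-good measurements is the core of the boot-time attestation described above; the limitation is exactly as stated, since the PCRs reflect what was measured at boot, not what the hypervisor is doing three months later.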
So you can have an infrastructure where you're doing cascading migrations that allow you to restart hypervisors every day, and then you only have a window of exploitation of a day where you can't attest to the state of the machine. I know of one cloud company that had a system like this; unfortunately, they all now work for Oracle.

So, network boundaries. This is straight out of the security guide. The security guide recommends that you should have at least these four networks, and actually I think you should have more, but the diagram was already in the security guide, so I just used this one. Public is your internet, your untrusted networks. The guest network: depending on your deployment, you probably don't want to trust the guest network. There are some situations where you might, but what we see is people wanting to use the cloud to replace everything: you're going to have everything from your external-facing marketing pages to internal super-secret sauce running on the same cloud, in different tenants, but on the same cloud, and you should aggregate trust downwards in that regard, so you don't trust that network at all. Management: again, my preference would be to divide this up. In the security guide it covers both cloud management traffic, so your RabbitMQ traffic might be going on there, but provisioning and config management might be travelling over that network as well. Either way, it's very sensitive, and I recommend breaking it up a little. And then your data network. This, again, is very sensitive, but tends to be more isolated, because you're mainly talking about your storage backplane for Swift, that sort of thing.

Working out which networks to trust can be a challenge. Part of it comes down to understanding who has access to your networks. How do you know if somebody plugs into a switch in your data center? Or, for that matter, if something just gets cross-wired and all of a sudden your very trusted network is bridged onto your untrusted network? Understanding those sorts of things is important, and understanding how your network layout logically works is important. There are options for securing some of this. Cryptographic overlays are something we're starting to see a little more of at the moment: I've spoken to a couple of people over the last few days about layering IPsec over these different networks, so that if you accidentally plug the wrong thing in, or if some bad person plugs the wrong thing in, they should just get gibberish; the various problems with X.509 and IPsec and shared secrets notwithstanding.

So bridging networks is an important point to bring up, and there's actually some really interesting stuff we can do nowadays, that we couldn't do a few years ago, to help keep things separate. It's very nice to be able to draw these separate networks, but unfortunately, especially with OpenStack, almost everything has to talk to almost everything else, which means each one of these individual nodes is connected to a large number of these different networks. That causes problems. In this example, the data network and management network are bridged by a compute host, but actually it's very likely a compute host would bridge all three. So you have something that's largely untrusted, because it's talking to the public internet, while also having the capacity to talk to your private and very trusted networks. You need to deal with that, and there are interesting ways to do it now that we didn't have before.

One of the big problems that we had in terms of isolation and containment, especially with OpenStack, was that a lot of the controls that are available to you can only really be applied to a given binary, a given process. That can be challenging when your entire stack is written in Python, because Python is an interpreter, and I can't put a whole bunch of rules just on the scripts and enforce that the interpreter will only run a certain script; not in a robust way that we've found particularly useful. But technologies like containers, and even virtual environments, now give you the opportunity to have individual interpreters for individual tasks: you can have a Python interpreter for doing Nova stuff and a Python interpreter for doing, let's say, Neutron, which allows you to keep them separate and apply different sets of controls to each interpreter.

But as we all know, and as a product manager at a certain organization that I won't mention told me the other day, security is easy: you turn on the firewall, you enable antivirus, and you turn on updates. So, securing the edge; turning on the firewall. This is an example, a slight change on a diagram some of you will have seen before, and it's a very, very finger-in-the-air way to demonstrate that the edge is really fuzzy in OpenStack. Even if you did have a strongly defined edge in the more typical network security viewpoint, the entire purpose of Nova is to take a foreign compute load and put it deep inside your infrastructure, so you have to do reasonably smart things to control it, and edge controls just aren't going to get you there.
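Since the controls I'll cover later all attach at the process level, it helps to be able to see what is actually applied to a given service process. A hedged sketch, using Linux's `/proc` directly; the function name and output format are my own, and field availability varies by kernel version:

```python
import os

def containment_summary(pid="self"):
    """Summarize per-process containment state from /proc (Linux-specific)."""
    info = {}
    # Seccomp mode, effective capability mask, and no-new-privs flag all
    # appear as lines in /proc/<pid>/status on reasonably recent kernels.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in ("Seccomp", "CapEff", "NoNewPrivs"):
                info[key] = value.strip()
    try:
        # The MAC label confining the process: an AppArmor profile name or
        # an SELinux context, depending on which LSM is active.
        with open(f"/proc/{pid}/attr/current") as f:
            info["MAC label"] = f.read().strip().rstrip("\x00")
    except OSError:
        info["MAC label"] = "unavailable"
    # Namespace identities: two processes showing the same value for a
    # namespace link are sharing that namespace.
    for ns in ("net", "pid", "mnt", "user"):
        try:
            info[f"ns:{ns}"] = os.readlink(f"/proc/{pid}/ns/{ns}")
        except OSError:
            pass
    return info
```

Pointing this at, say, a QEMU process on a compute node gives a quick answer to "is this actually confined?", which is exactly the question VENOM forced a lot of operators to ask.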
One of the reasons you have to rely on more than just edge controls is advanced exploitation. Something some of you may have run into before is resistance to applying an update, because it's going to slow things down, or it's going to slow a release, or something like that. The point here is that there are different ways exploits can be combined: what a purposeful attacker can do with a handful of exploits can be greater than the sum of its parts. So this is a great example. You have a web application running that's easily compromised, and the attacker is then able to take over the Apache process without too much trouble. Now, your container here could be an actual container, or it could be a VM or something similar, but the attacker just needs to wait for some sort of privilege escalation to allow them to gain control of that. This is where we hear terms like "advanced persistent threat". A lot of the time in threat analysis, people focus only on what can happen, on a very short timeline. An attacker who really cares about subverting the organization that owns this could hang on to that point of presence in the container for months, until they get something like a Linux privilege escalation exploit that allows them to subvert the kernel, and then the entire machine. Those of you that are closely involved in OpenStack will know that today, in most configurations, when you own the machine, you have access to all sorts of interesting credentials that will allow you to move around a lot of the infrastructure and subvert large parts of the cloud.

So the way to deal with this is to encourage defence in depth: to encourage people, first off, at the web application layer, to use safe programming techniques and secure their stuff. But you can't and shouldn't rely on that. In all of our threat analysis and all of our designs, at least at HP, we assume that all the VMs want to hurt us; we assume that everything is completely hostile. It's great if it's not, but we assume that it is. There are some measures you can take at the virtualization layer to contain things. Reducing the attack surface right throughout the stack is really important: on your individual nodes, there's not much point in running an entire enterprise Linux stack when you could be running exactly what you need to deliver that service, and that's something we see a lot at the moment. Hardening the kernels in your image libraries is important as well; there's a lot of hardening stuff I'm going to discuss in a few minutes that you should be able to apply to the image libraries you're providing to your customers, and by doing that you make things much easier down at the web application layer, and it saves you a lot of bother.

So I said I'd talk to you a little bit about VENOM and about breakouts, but of course I already did. Those of you that were in Hong Kong: we had a talk there about hypervisor breakouts, "the elephant in the room". We hadn't had a hypervisor breakout for a few years then, and I predicted we would have a QEMU-oriented breakout within the next 12 months. Just about on time, we had one, which is a concern. So what are hypervisor breakouts, or virtual machine escapes? They are where a virtual machine, through some level of inappropriate access to the services provided to it, is able to gain a point of presence on the machine that it is running on. These are not new: VMware Cloudburst was widely reported to enable you to subvert ESX back in 2008; the Xen "ownage trilogy"; in 2011, Virtunoid; it's been fairly consistent for the last few years, and of course most recently VENOM. So the point here is that breakouts aren't unicorns. They happen, and they are actually fairly regular in the wild. I've heard discussions that there are more hypervisor breakouts around on the black market. My personal opinion is that we're unlikely to see many genuine zero days, just because they are very, very valuable, and by valuable I mean expensive on the black
market. So we still expect to see these vulnerabilities, for the most part, only really in the hands of those one or two top-tier actors. However, occasionally things like VENOM happen, occasionally people don't follow responsible disclosure too well, and occasionally everybody's day gets ruined the week before a conference.

Developers are using virtual machines for all sorts of things, and we see machines getting compromised all the time. The reason for that is that they use them for dev and test. We've taught developers that you have this throwaway resource; you no longer have to wait weeks for your IT department. And because they're easily available, developers are often not as diligent about protecting them and looking after them as they should be, because as far as they're concerned, if it gets compromised, "I'll just kill it off and create another one". That creates a real challenge, and it's really an educational challenge for organizations running clouds; for those of you running public clouds, you're just stuck with it. Virtualization provides access to a lot of devices that are on by default. When we were talking about this a couple of summits ago, I described things like Bluetooth stacks: if most of you look at how your virtualization stack is configured today, you probably have Bluetooth compiled in, and you probably don't use much Bluetooth in your data centers, so you may want to turn it off before someone owns you. Hardware reservations are an interesting strategy: the idea that tenants will pay you some premium to reserve all of the metal they're running on, or at least book it out just for themselves. And isolation and containment is really what the next few slides are going to be about.

So, VENOM. This is the lovely graphic that the VENOM guys did. Here you can see a web server has already been compromised, and the attackers have already escalated privileges in the virtual machine. They're then able to leverage a vulnerability in a floppy disk controller to gain a point of presence in the QEMU process, which, for most people, is going to be running under libvirt, at least if you're on KVM. Now, depending on how you have things set up, that could be as far as the exploit goes, contained by whatever your mandatory access control framework is, and that should be it. But actually, the empirical evidence over the last week, and the panicked emails I've had from people in various organizations, leads me to believe that maybe people aren't doing that as well as they should be. The reason the vulnerability doesn't stop there is that, from a point of presence in the QEMU process of a KVM machine, you can probably access the other virtual machines running on that system and the various resources they have available to them, but you could also escalate privileges to subvert the entire node. And when you do that, as we said earlier, you gain access to all the configuration information and privileges on that box.

So I'm going to talk to you about containment, and I'm going to have to start speaking a little bit faster as well, so I apologize for that. Containment is all about limiting the scope of a VM breakout: we know VM breakouts happen, there's no point pretending they don't, so how do we contain them? One way to do that, and I would say if you're going to invest time anywhere, invest it here, is mandatory access controls. That is the SELinux logo; that is not the AppArmor logo, as far as I can tell there isn't one, but that guy's really cool. They give you a few capabilities: you can define how a process should behave, and you can block and alert when a process steps outside of that behaviour. Among the things you can do there, you can define how a process should access files; if you have a process that never reads anything from /opt, for example, and then it
starts reading, you can control that. It also gives you control over Linux capabilities. Primarily there are two frameworks here, and yes, I know there are things like SMACK, and I know grsecurity does mandatory access controls as well, but realistically most people are going to be looking at these two. SELinux gives you object-label-based security; it's very prescriptive control, and it's highly complex, I think that's probably fair to say. You can't sit down and look at an SELinux policy and understand it without having first read how SELinux policies work, but it does give you some excellent control. AppArmor is path-based; it's much easier to deploy, and there's, I think it's fair to say, better automated tooling around it today. Some people will say it doesn't give you the same granularity of control over what processes can do, and there may be a trade-off to make in your organization. Just to illustrate: on the left we have an FTP daemon AppArmor profile, and on the right we have the start of an SELinux one. Now, the one on the left I can probably read; I can understand that it's allowed to read from /dev/urandom, for example. The one on the right I struggle to read; unfortunately, I also struggled to fit it on one slide, or two, or three, or four. So my point here isn't necessarily that one is better or worse than the other, but sometimes a less perfect system that everybody can deploy is better than a perfect system that nobody can. Which does kind of sound like I'm trying to sell AppArmor; I'm not, because my Red Hat friends will beat me.

So, are we secure yet? We have mandatory access controls. This came up in the ops security discussion yesterday. I have a lot of operators here, and I'm going to assume that you all have some controls in place, because I don't want you to have to admit that you don't. Who decided, despite the fact that they have great security, that they would be best off restarting their QEMU processes and rolling out a new version in the light of VENOM? Come on, be honest; I know it's more than that, because I saw some of you in the ops room. Awesome. And that's perfectly reasonable. Even if you had all these awesome controls, one of the security capabilities we always push for is the ability to do rolling updates anyway. These controls don't fix things like VENOM; they just buy you time to react without breaking your business.

POSIX capabilities are interesting. POSIX capabilities give you ways to grant processes specific privileges. Who here thinks that in order to deploy OpenStack effectively you have to massively overuse things like sudo? Yeah, so my friend in the NSA gets it. The use of privileges within OpenStack is at the moment less advanced than it needs to be. Previously this would have been hard to do, but POSIX capabilities allow an administrator, on the command line, to say "this service can do the following things as a privileged user" without having to give it full privileged access to everything. A trivial example: take a basic web server. The number of times we've seen something like Apache being run as root is fairly terrifying, and yes, I know there are drop-privilege mechanisms and so on, but go back a few years and they weren't there. POSIX capabilities allow you to say "this process can bind to a privileged port, but it can't do anything else as a privileged user". There are a couple of good ones I've pulled out here: binding to privileged ports; performing some system admin tasks; and ptrace is interesting, because some debuggers, and in fact some security tools, can do interesting things with ptrace snooping, but you don't necessarily want to give them full privileges to everything. So understanding where you do need privileges, and granting them appropriately through POSIX capabilities, is a great way of reducing the abilities of an attacker who has gained some level of access to your system.

Seccomp. Is everyone aware of seccomp? No? Great, okay. Seccomp is a system that allows a running process to drop access to most system calls; the idea is that you can run in a secure mode. Its first iteration came out quite a while ago, I think around the 2.6 kernel, five or six years back. When you run QEMU for the first time, it has to do a whole bunch of things: it has to read files, it has to set itself up. Once it's done all that, its file descriptors are probably open, it's probably made all the major system calls it needs to, and it could drop into what they called secure mode, now referred to as mode 1. By doing that, it could from then on only use four syscalls, on already-open file descriptors. Mode 2 is more interesting; it took a lot longer to develop. You can use Berkeley Packet Filter-like descriptions to set up a policy: your process runs, it does all of its setup, and whereas before, if you needed more than those four syscalls, you were kind of stuck and had to carry on running with access to everything, now you can say "well, I need those four and five more, and I need to use them in this way", and then switch mode. The interesting thing about seccomp is that once you've switched mode, you can't switch back: that process, from then on, can't make any syscalls outside its policy. And this is where you close off interesting privilege escalation opportunities: the process can no longer escalate privileges by abusing syscalls that are found to be vulnerable, because it can't call them.

Grsecurity, including PaX. Firstly, I think the PaX logo is awesome; that is their actual logo. That's all I do for talks, it's just Google Images, it's great. Grsecurity is a whole bunch of kernel security enhancements that do a lot of interesting things. One of the major things is that it enforces a strong least-privilege policy on the system. Grsecurity has its own mandatory access control framework; I don't know it well enough to comment on it in
this talk, apart from to say that it's there, and if you don't like AppArmor or SELinux or SMACK, then there's grsecurity as well. It prevents arbitrary code execution in the kernel and randomizes the layout of sensitive kernel structures; again, this is to make achieving a Linux privilege escalation much harder. It also enforces a better mode of blocking execution, I forget the term now, but basically leveraging the NX bit, so that with buffer overflows the CPU won't execute instructions unless the memory is actually marked as executable, and it will emulate that if your hardware doesn't support it. There are also memory write/execute controls, which are really interesting: in this model, your memory can only ever be writable or executable at any one point, and it's very clear about the crossover points. So a compromised service that attempts to execute memory it should only be writing to will get blocked, and all sorts of alarm bells will start ringing. The idea here is to stop privilege escalations both in applications and at the kernel level.

An interesting technology you can use here, and this is leveraged in containers a lot, is Linux namespace isolation. Namespace isolation lets a process run with its own contained view of the system; in fact this, combined with cgroups and a few other things, is basically the foundation of what LXC provides for running containers, which is the foundational technology behind Docker, and insert your other container technology here. But you can also leverage this in userland. Does anyone know where this is used in OpenStack today? Neutron uses namespaces for providing full network stacks. That's really interesting: the network namespace allows you to provide a process with a virtual interface, and it basically has a full stack there, including iptables. So you can start coupling firewall rules and iptables to individual services, rather than to individual nodes. You no longer have to have, and I've seen installations like this, basically one firewall rule set that has every possible service for every possible node, just pushed out everywhere because it's the best way to not break things; this way, you can bring these rules much closer to your running services. And like I said before, if you approach deploying OpenStack in a way that gives you multiple interpreters running in different places, then you can start attaching their own network stacks to them: your Nova one will only have certain IP addresses allowed, perhaps; you can get quite granular and do some clever things. There are a whole bunch of namespaces; this list isn't exhaustive, and some are more mature than others. The user namespace is interesting in that it provides UID mappings for container processes, so that even if someone were to gain a point of presence in a container where they would expect to be running with an effective UID of zero, they're actually not in terms of the system overall. So namespaces are a very interesting way of providing different types of isolation and extending security for the different processes that are running.

I mentioned control groups a second ago. Cgroups are how you isolate and account for resource usage across different processes; amongst other things, it's how you make sure you deal with noisy neighbours in KVM. It's just a very good way of keeping control of what processes are doing. Lots of people run multiple OpenStack services on the same node; cgroups ensure that one vulnerable service, even if it's just caused to loop infinitely, can't take down your entire cloud.

Right, so, a quick review of containment. We've gone through AppArmor, and sVirt, which is the combination of policies provided for SELinux; although the policies are kind of hard, sVirt exists already, which is pretty good, for working with KVM and I think some Xen stuff as well. Seccomp, very interesting for blocking access to unneeded syscalls; cgroups; namespaces; grsecurity. There's also a bunch of very good content on this in the security
guide, and there are some chapter references there for you; obviously I'll make these slides available.

Now I'm going to have to walk you through cloud configuration very quickly, so I'll just have to cut all the jokes out. TLS everywhere: it's really hard, and it can be quite messy. We're working on ways to solve that, and one of those ways is the Anchor project. It's an ephemeral PKI system that uses passive revocation, which means that provisioning certificates becomes a lot safer, and you can say cryptographically, and with certainty, that you know all the certificates in use within your organization at any given time. I'll skip over the other cool stuff, but the other thing it gives you is protection against some crypto attacks, because we replace private keys every time we issue new certificates, which means that things like Heartbleed wouldn't cause you lasting problems if you're using a system like Anchor.

Data at rest: there are two options available to you right now. One is hardware, so I'll wave the HP flag: if you want to encrypt all data at rest with no impact on performance, there's a product called HP Secure Encryption that has a negligible effect on read and write times, and you can deploy that all together nicely. Other options for encryption exist, like LUKS; the problem there becomes managing keys at scale. Where do you store the keys, and how do you make sure they get to nodes when those nodes do things like spontaneously reboot? There are two ways I think you can do that nicely. The first is to inject keys through your lights-out management system. The second, which is something I haven't discussed with anyone yet, is that I want to work on having a bootstrap ramdisk that leverages PyKMIP to talk to a KMIP server and pull back a key to unlock the disks on that machine, and then boot the machine. Native-level encryption is coming in Cinder, Nova ephemeral storage, Swift, and Glance. After this talk today there are both Swift and Glance design sessions; if you're interested in where that's going and you
feel you can contribute to the design sessions, please go along.

Entropy: you have to be clever about how you handle crypto in terms of virtual machines. The short version is that there's an unprivileged instruction you can execute from a virtual machine, or you can go down through the virt stack, to pull back randomness. If you don't trust RDRAND, then watch Matthew Garrett's talk from a couple of days ago, where he speaks a whole bunch more about how to do this and combine it with a TPM.

OpenStack access controls: you have policy.json files. Go and look at them; they probably need to be fixed for your deployment. I think there's some advice on this in the security guide, and there's also a great talk from Adam Young a couple of days ago on the future of where this is going, called dynamic access control.

A note on OpenStack tokens: although you can get a scoped token, be aware, in terms of your threat analysis and how you use the system, that tokens can be re-scoped. This is a design feature, not a bug, but it is confusing; you do see people expecting a Nova token to only be usable for Nova actions. Go and have a look at OSSN 42, which goes into some depth on this, and Nathan Kinder has an excellent blog article on it as well.

Intrusion detection: Dan Lambert spoke about this a couple of days ago. There are different ways and different places you can deploy an IDS today. My personal favorite is to span traffic off to a secondary OVS on a compute node; there are other ways of doing it, and tapping off the bridge interface is one. It would be nice to see some real progress on this: as far as I'm aware, Firewall-as-a-Service hasn't really become a thing, so everybody's doing their own thing. I'm very interested in setting up some reference architectures, because I think everyone is actually interested in solving this problem. For host IDS, OSSEC will give you a lot of nice properties and allow you to protect your system and detect when people are doing bad things; other IDSes exist.

Credentials and shared access: I
should have mentioned this earlier, but we're about to run out of time. Don't use the default passwords. You can do clever things here: if you want to get extra points, use Chef or Puppet so that you're pushing out different credentials to different nodes and updating things like the Nova database accordingly, so nodes can all talk to it independently. There's something funny to be said here about how security is really expensive and I should get paid more. Breakouts: they're not unicorns, so do smart things to protect against them. Defense in depth: it only buys you time. And OpenStack hardening: don't use the same passwords that come with your distribution. There you go.
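To show what "go and look at your policy.json" means in practice, here are a few illustrative Nova-style rules (names simplified, not a complete file). The thing to check is which actions fall through to the permissive `default` rule when your deployment actually wants them admin-only:

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "compute:create": "rule:admin_or_owner",
    "compute:unlock_override": "rule:context_is_admin"
}
```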
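To make the network-namespace point concrete, here's a minimal sketch of giving a single service its own stack and its own firewall. The namespace name and port are illustrative (not anything Neutron itself creates), and the commands need root and iproute2, so the sequence is written out to a script rather than run directly:

```shell
# Per-service network namespace with its own iptables policy (run as root).
cat <<'EOF' > /tmp/netns-demo.sh
ip netns add svc-api                          # a fresh, empty network stack
ip link add veth0 type veth peer name veth1   # a virtual cable, host <-> namespace
ip link set veth1 netns svc-api               # move one end inside
ip netns exec svc-api ip addr add 10.0.0.2/24 dev veth1
ip netns exec svc-api ip link set veth1 up
ip netns exec svc-api iptables -A INPUT -p tcp --dport 8774 -j ACCEPT
ip netns exec svc-api iptables -P INPUT DROP  # firewall scoped to this one service
EOF
chmod +x /tmp/netns-demo.sh
```

Rules added inside the namespace never touch the host's tables, which is exactly what lets you couple firewall policy to a service rather than to a node.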
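The user-namespace UID mapping described earlier can be seen with a one-liner. This assumes util-linux's `unshare` and kernel user-namespace support (often disabled in hardened or containerized environments), so again it's captured as a script:

```shell
# Root inside the namespace, unprivileged outside.
cat <<'EOF' > /tmp/userns-demo.sh
unshare --user --map-root-user sh -c '
  id -u                   # reports 0: effective root inside the namespace
  cat /proc/self/uid_map  # e.g. "0 1000 1": uid 0 here maps to host uid 1000
'
EOF
chmod +x /tmp/userns-demo.sh
```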
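A minimal cgroup sketch for the noisy-neighbor point, assuming a cgroup-v2 host; the group name and limits are made up, and the writes need root, so the sequence is captured as a script:

```shell
# Cap one service at half a CPU and 512 MiB so a runaway loop in it can't
# starve the other OpenStack services sharing the node (cgroup v2, run as root).
cat <<'EOF' > /tmp/cgroup-demo.sh
mkdir /sys/fs/cgroup/svc-api
echo "50000 100000" > /sys/fs/cgroup/svc-api/cpu.max    # 50ms of every 100ms period
echo 536870912      > /sys/fs/cgroup/svc-api/memory.max # 512 MiB hard limit
echo "$SERVICE_PID" > /sys/fs/cgroup/svc-api/cgroup.procs  # adopt the process
EOF
```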
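For the LUKS key-management point, the shape is: a random keyfile per node, plus a delivery mechanism (lights-out management, or a KMIP pull at boot). A sketch with an illustrative device name; generating the keyfile is unprivileged, while the cryptsetup steps need root and are written to a script:

```shell
# A 512-byte random keyfile - this is the artifact you'd inject via your
# lights-out management system or fetch from a KMIP server at boot.
dd if=/dev/urandom of=/tmp/node.key bs=512 count=1 2>/dev/null
chmod 0400 /tmp/node.key

cat <<'EOF' > /tmp/luks-setup.sh
cryptsetup luksFormat /dev/sdb --key-file /tmp/node.key     # one-time, as root
cryptsetup luksOpen   /dev/sdb data --key-file /tmp/node.key # on each boot
EOF
```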
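And for the per-node credentials point, a sketch of what your Chef or Puppet run would be doing under the hood: minting a distinct secret for each node rather than shipping one password everywhere. Node names are placeholders:

```shell
# One distinct 32-hex-char secret per node; a config-management run would
# template each into that node's service config and the matching DB grant.
for node in compute01 compute02 controller01; do
  secret=$(od -vN16 -An -tx1 /dev/urandom | tr -d ' \n')
  echo "$node $secret" >> /tmp/node-creds.txt
done
```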