So my name is Rob Clark. I'm the lead security architect for HP Cloud. I've been with Hewlett-Packard for three and a half-ish years. I'm a distinguished technologist in the cloud group and have been responsible for most of the security work that we've been doing there. I want to talk to you a little bit before I get going just about the OpenStack security group. We've been doing a hell of a lot of work in the last two or three years to improve the state of security in OpenStack. We've got membership from right across all the big players and a lot of the smaller players and independent people involved in OpenStack today. We're involved in a whole bunch of different initiatives. We wrote the OpenStack security guide. We issue OpenStack security notes, which are pieces of security guidance a little bit like advisories, but they're more little gotchas in OpenStack: things that might cause you to end up deploying things insecurely unless you're careful. We work on threat analysis, which we'll talk to you more about in a talk later on today. We've been working a bit on static analysis. If you go and have a look on Stackforge for a project called Bandit, you'll find something that'll point out some of the things you've done wrong in your Python, which is pretty cool because there isn't really any decent security-oriented static analysis tooling for Python. We continue to grow. We're easy enough to find through Google. If you're interested in security or anything we talk about today, then please drop us a mail, join the group and get involved in the conversation. So I'm going to talk to you a little bit about OpenStack. Some of these next few slides might be familiar to some of you. In fact, one person in the room stole these slides. So we'll talk a little bit about some of the challenges for securing OpenStack. So OpenStack seems relatively simple. We have a couple of services.
If we apply fairly standard security approaches to locking it down, we can understand where our different data flows have to be. We can create little walled gardens, little protected areas where our HA stuff can talk to one another and all the underlying services can interact. Make sure all our data paths run across them, and, yeah, OpenStack is secure. Everyone can go home, close down the security group. Good job. Unfortunately, this deck should have come with a sarcasm warning. OpenStack isn't quite as simple as that. You end up with a lot more data paths. You end up needing messaging for just about everything. You end up needing billing for everything; even if you're not monetizing your cloud, you need to know how people are using it. You end up with lots of different interconnecting services. The walled garden model doesn't work so brilliantly when everything has to talk to everything else. In fact, it just doesn't work. So we can't use walled gardens. We can't very easily use network segregation. You can attempt to use ACLs, but on large deployments you may actually overrun the ACL tables on the hardware you're using. Software-based networking isn't really there for us yet. So we have to go with a different approach. One of the things we considered doing was encrypting as many of the individual connections as possible. Most OpenStack services you can put behind TLS. The ones that you can't, you can a lot of the time position behind a TLS terminator that will do a lot of the heavy lifting for you. So I'm going to talk to you a little bit about TLS. Basically we ran into a few problems. Deploying TLS isn't very hard; people have been doing it for a long time. Deploying TLS and managing all the certificates that are involved can be quite tricky. So TLS provides us with two things. It provides us with confidentiality and it provides us with message integrity.
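As an aside, the "put the service behind TLS" step can be sketched with Python's standard `ssl` module. The version floor here is TLS 1.2, which is stricter than the talk's minimum recommendation of 1.1; adjust to your compatibility needs.

```python
import ssl

# Build a client-side TLS context with certificate verification on
# and an explicit protocol floor. create_default_context() already
# enables hostname checking and CERT_REQUIRED; we set them explicitly
# for clarity.
def make_tls_context(ca_file=None):
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL/early TLS
    ctx.check_hostname = True                     # verify FQDN vs. cert
    ctx.verify_mode = ssl.CERT_REQUIRED           # refuse unverified peers
    return ctx

ctx = make_tls_context()
```

Wrapping a socket with this context gives you the two properties the talk names, confidentiality and message integrity, plus peer authentication via the certificate chain.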
And from a slightly higher level it provides you with some measure of authentication as well, because you know who you're talking to. Notice I'm not saying SSL/TLS, I'm just saying TLS. Anyone who still says SSL gets POODLEs thrown at them. The technology has come a long way, from Secure Sockets Layer being a weird thing that Netscape were working on that was horribly broken, to a thing that the community was working on that was still horribly broken, and then up through TLS 1.0 and 1.1. Basically everything you're doing should be at least TLS 1.1. Some people will want to use 1.0 for compatibility reasons, so you make your choices. TLS generally pivots on X.509 certificates; you're generally using v3 certificates. They provide you with a bunch of things, and I'm just doing a quick review here. You can tell who a certificate was issued by, and whether that's somebody you trust. And it has a bunch of properties: you can have your fully qualified domain name in there, subject alternative names for different machines that might need to use the same certificate, a description of how to check revocation (I should have a big circle around that one for this talk), a description of how the cert can be used, and when it's valid from and to. So when people normally talk about a certificate authority, they're actually talking about two things: a registration authority and a certificate authority. So, just quickly, who's ever had to request a certificate from Verisign or Symantec or something like that, right? Okay, you create a CSR, blah, blah, blah, send it off. And then you have to fill out a form that says who you are and what company you work for and why they should give you the certificate that says robhp.com, whatever. When you're putting in all that information, you're talking to the registration authority. The RA is who decides, or what decides, whether or not you should be given a certificate.
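The certificate fields reviewed above (issuer, FQDN, subject alternative names, the revocation pointer, usage, and the validity window) can be modelled as a small illustrative structure. This is not a parser and not any real library's API, just the properties a verifier cares about; all names and values are made up for the sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative model of the X.509 v3 properties the talk lists.
@dataclass
class Certificate:
    subject_fqdn: str
    issuer: str
    subject_alt_names: list = field(default_factory=list)
    ocsp_url: str = ""                        # where to check revocation
    key_usage: tuple = ("digitalSignature",)  # how the cert may be used
    not_before: datetime = None
    not_after: datetime = None

    def in_validity_window(self, now=None):
        now = now or datetime.utcnow()
        return self.not_before <= now <= self.not_after

now = datetime.utcnow()
cert = Certificate(
    subject_fqdn="nv-compute01.example.com",   # hypothetical host
    issuer="Ephemeral CA (Nova)",              # hypothetical issuer
    subject_alt_names=["nv-compute01", "nv-compute01.internal"],
    ocsp_url="http://ocsp.example.com",
    not_before=now - timedelta(hours=1),
    not_after=now + timedelta(hours=11),       # short-lived, per the talk
)
```

The validity window check here is the expiry mechanism the rest of the talk leans on.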
The CA is relatively simple. It signs a certificate, makes an entry in a database to remember it's given you a certificate and, you know, puts you on their mailing list and stuff. But the RA actually does a lot of the heavy lifting in terms of the actual trust that needs to go into the certificate. What a certificate authority and an RA do together is assert that you are who you say you are. They give you a certificate that says: I am who I say I am, this person has verified it, you should trust me. So you request a certificate, fill out the CSR, send it off. When you're doing it internally, and actually at more commercial CAs than people would like to admit, a lot of the time there's a person on the other end applying a script, literally following a script on one screen, making sure that the data matches on another, going through performing various manual checks. And this is often, like I say, an individual, even at corporate scale. Policies can be difficult to implement sometimes. They tend to be fairly corporate in nature, so they might not be specific to the applications you're trying to deploy. Like I say, these people are normally following a script. And if you look at, say, all the people here that are deploying private clouds or even hybrid clouds, the person who's administering your certificate authority and acting as your RA, or as the human that underpins your RA, it's probably not their day job. In fact, it's probably one of the parts of their job that they hate the most. And you don't want to trust all the crypto assurance in your platform to somebody who hates the job that they're doing. So after a certificate has been issued, it may turn out that the certificate is not required anymore, or it may need to be revoked. Like I say, PKI admins generally hate their jobs. They have a lot of power in the organization, the RA, and things should have a decent audit stream. But how often it's checked is sometimes questionable.
The person managing that platform does have the power to create certificates for *.com that will be recognized by everybody that trusts the CA, OCSP stapling and things aside. And a couple of years later, if everything hasn't broken already, the certificate expires and you didn't have a system in place to deal with it. That has broken more people's organizations than people would admit. Revocation. So I'm not sure if anyone is a Princess Bride fan, but I don't think that means what you think it means. Or to put it another way, revocation doesn't work for how we try and use it. The only places where revocation works well right now are in browsers. If you look at all the Python libraries we're trying to use, sometimes they'll implement CRLs. So, to back up slightly: certificate revocation works in one of two ways, really. Certificate revocation lists, which are signed lists of certificates that should no longer be trusted, because they've been lost, or compromised, something like that. CRLs stopped being used as a general web technology a number of years ago. If you open up any of your distros, any of your laptops, and look on there for your CRL list, you won't find it. It's not there for any of the certificate authorities you trust. And that's because they got massive and huge and too bulky to distribute. So the Online Certificate Status Protocol, OCSP, is a lightweight protocol carried over HTTP. You can send off a message to a machine saying: do I trust this certificate, or should I continue to trust this certificate? And you'll get a yes, no, or come-back-later type response, cryptographically signed. And that information, the OCSP responder, is in the certificate that you're trying to check, so you can look that up. Unfortunately, the libraries that we use in OpenStack, and most client-side TLS libraries, don't do OCSP very well at all.
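To make the two revocation mechanisms concrete, here is a toy sketch. The names and data structures are mine, not any real library's: a CRL is essentially a signed, dated list of serial numbers you fetch and keep, while OCSP is a per-certificate query answered good, revoked, or unknown. Real CRLs and OCSP responses are signed blobs; the signatures are omitted here to keep the sketch short.

```python
from datetime import datetime, timedelta

# Toy CRL: in reality a signed, dated blob published by the CA;
# here, just a set of revoked serial numbers plus a publication time.
class CRL:
    def __init__(self, revoked_serials, published):
        self.revoked = set(revoked_serials)
        self.published = published

    def is_revoked(self, serial):
        return serial in self.revoked

# Toy OCSP responder: answers per serial instead of shipping the whole
# list, which is why it scaled where web CRLs didn't.
def ocsp_check(responder_db, serial):
    status = responder_db.get(serial)
    if status is None:
        return "unknown"   # the "come back later" case from the talk
    return status          # "good" or "revoked"

crl = CRL({0x1A2B, 0x3C4D}, published=datetime.utcnow() - timedelta(hours=6))
responder = {0x1A2B: "revoked", 0x9999: "good"}
```

The scaling difference is visible even in the toy: the CRL client must hold every revoked serial, while the OCSP client only ever asks about the one certificate in front of it.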
It's great for web browsers, which generally tend to support it, but the libraries a lot of the time don't. I want to talk a little bit now about the kind of infrastructure you need to do high-availability PKI generally, for something like OpenStack. So, simple PKI: you have your root certificate, intermediates below, and then various servers using the certificates. This looks reasonably familiar to people, right? Okay, except normally you need your intermediate certificate authorities to have some measure of availability, so you need to replicate them, and they use a database or some sort of RPC between them to make sure that they maintain state and they know what's going on with certificates. If you want to do CRL or OCSP responders, they all need to hang off the same database as well. Your database becomes a single point of failure, so you need to replicate that, and you need to replicate your responders in your different availability zones, and then you end up with more intermediates. It just gets kind of messy. And when you end up having lots and lots of servers, so take this as a private cloud with many, many servers, your PKI admin is going to be very, very unhappy if you're trying to provide three or four certificates for every machine in there. So it's infrastructure heavy, and licensing can be very expensive: if you're using a proprietary certificate stack, like ADCS or some of the other trust services available, then cost can get quite high. The lifecycle can be kind of messy. If you take any one of those services and look a year down the road, you can tell what certificates were issued and when, but you don't know what's still required, what's in use. You don't know if the machines that you gave those certificates to still exist.
You don't know if the admins who have those certificates have done bad things with them, and you can't make assertions about the lifecycle of the private key that was used to generate the certificate request in the first place. So scale is hard. As far as we're concerned, we don't trust the revocation stuff that's there, and it's very difficult to make it work. So we decided to have a look at a redesign of how we would do certificate provisioning and revocation. We were gonna do this using only existing libraries, with all the faults that they have. We wanted to make it simple and kind of make it feel like OpenStack, and we wanted to make it open source. The reason I've added lots of humor in here is because I'm about to get onto how we did it, and lots of people won't like it. So we start with one fundamental thing that was borne out by our testing, which is that revocation is broken and expiry isn't. That is to say that revocation, in the types of systems we try to deploy and protect with TLS, doesn't work as well as we'd like it to. We don't consider it to provide us with a high assurance model. Expiry generally works pretty well in every library we've found. Obviously it requires clocks to be relatively accurate. So we designed a system that we refer to as ephemeral PKI, and we have an ephemeral CA. The whole system pivots on us giving out very short lifetime certificates. We give you certificates whose lifetimes are measured in hours rather than years. When you do that, your certificate admin's head explodes, because they're now getting N thousand requests every few hours for all the machines that need certificates. I'm gonna go through a little bit more of how that works. But it gives you a few interesting properties. We end up with a system that we can scale really well. It scales in the same way that you scale everything else in OpenStack. It can be siloed, so it can be deployed, and revocation will still work, in low-connectivity environments.
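The core trade described here, replacing revocation with short expiry, can be sketched in a few lines. Everything below is illustrative, not the actual HP code: issuance stamps a lifetime measured in hours, and "revoking" a bad certificate is simply a matter of not renewing it and letting the clock run out.

```python
from datetime import datetime, timedelta
import itertools

_serials = itertools.count(1)  # monotonically increasing toy serials

# Issue a certificate whose lifetime is measured in hours. There is no
# revocation list anywhere: a bad certificate ages out at expiry.
def issue_ephemeral(fqdn, lifetime_hours=12, now=None):
    now = now or datetime.utcnow()
    return {
        "serial": next(_serials),
        "fqdn": fqdn,
        "not_before": now,
        "not_after": now + timedelta(hours=lifetime_hours),
    }

def still_valid(cert, now=None):
    now = now or datetime.utcnow()
    return cert["not_before"] <= now <= cert["not_after"]

t0 = datetime(2015, 1, 1, 0, 0)
cert = issue_ephemeral("nv-compute01.example.com", lifetime_hours=12, now=t0)
```

The "siloed" property falls out of this: validation needs only the trust anchor and an accurate clock, no database, CRL distribution point, or OCSP responder, so it keeps working in low-connectivity environments.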
So you no longer need to have centralized certificate management and replicate OCSP responders everywhere, or create RPC interconnects so that different bits of your data centers can talk to one another. We have a diode audit stream: output flows one way, out. I mentioned a few times that this system is kind of stateless. One fun fact that people don't like: it doesn't know who it gave a certificate to. Doesn't know, doesn't care, doesn't need to. It does have an audit stream, so we always know who was given a certificate and when, but the system doesn't rely on that to give out certificates. So we accept that this is kind of an interesting way of approaching things, and it requires you to accept a couple of things, and we get a hell of a lot of benefits out of it. What I'm gonna do, instead of trying to tell you all about it now, is run through a little bit of how it works. There are a lot of interesting deployment modes you can use with this. I'm gonna run through some of the more basic ways you can do it and hopefully convince some of you that this isn't as scary as you might think. So we have a very simple software stack. We have a REST interface, a Pecan-based API, just as you would expect in OpenStack. We have a pluggable authentication layer. We have a decision engine: whereas your human RA sits there and looks at the information you provided and runs through that little flow chart, we figured that scripting is a thing, so we went down that road instead. And then we have a certificate authority, which unfortunately, because of some of the challenges with M2Crypto, currently relies on a slightly modified version of the library, which is something that I'm hoping Paul and his friends will fix for us in the very near future. So this is gonna be the most simple configuration. Think about this in a dev test environment, or actually think of it in a DevStack environment.
One of the things we can do with ephemeral PKI, because it's a very lightweight stack and doesn't require anything that isn't kind of OpenStack to work, is deploy it into DevStack if you wanted to, and actually have full certificate services running inside DevStack. So a CSR gets created, and we do this on the server using certmonger. We've written an extension plugin for certmonger so it can talk to ephemeral PKI. So certmonger, in this system, will create a new private key, create a CSR and send it to our server along with some authentication information. The REST interface receives it and punts it to the authentication system. Authentication says, yep, that's fine, and punts it to the decision engine, which applies a bunch of different rules. Now, the rules can be as permissive or as restrictive as you want; they can check as many things and reach out to as many other systems as you need them to. So one of the things we can do here in an automated way, in your decision engine: let's say a certificate request comes through for Nova. You can go away and check CMDB and check the reverse DNS, so you get the IP for the FQDN that was provided by the system that was saying it was Nova. And if that resolves to something else, if that turns out to be a machine that you created with the purpose of being a Swift box or the Horizon interface or something like that, then you can say: no, go away, I'm not gonna talk to you again. If it passes all the checks, the certificate authority will sign it and you get back your certificate. So, pluggable authentication: in its most simple form, we can just use shared secrets. If you've got a small deployment and you just want to have TLS operating between the different points, sort of a testing environment or a very small deployment, you can use shared secrets. They're not great, but you can't pretend that's not how OpenStack configures everything right now.
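A minimal sketch of that receive, authenticate, decide, sign pipeline, using an HMAC over the CSR with a shared secret for the authentication step. All names here are illustrative assumptions, not the real API: the actual stack has pluggable authentication and uses certmonger on the client side, and the signing step is stubbed to a string.

```python
import hashlib
import hmac

SHARED_SECRET = b"not-a-real-secret"  # placeholder; never hard-code this

# Authentication step: verify an HMAC the client computed over its CSR.
def authenticate(csr_bytes, client_mac):
    expected = hmac.new(SHARED_SECRET, csr_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, client_mac)

# Decision engine: every registered rule must pass.
RULES = []
def rule(fn):
    RULES.append(fn)
    return fn

@rule
def fqdn_present(request):
    return bool(request.get("fqdn"))

def handle_request(request, csr_bytes, client_mac):
    if not authenticate(csr_bytes, client_mac):
        return "NACK: authentication failed"
    if not all(r(request) for r in RULES):
        return "NACK: rule check failed"
    return "CERT for " + request["fqdn"]   # a real CA would sign here

csr = b"-----BEGIN CERTIFICATE REQUEST-----..."
mac = hmac.new(SHARED_SECRET, csr, hashlib.sha256).hexdigest()
```

The rule decorator is the point of the sketch: the human RA's flow chart becomes a list of small scriptable checks, each of which can veto issuance.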
So at the very least it's idiomatic. We can do LDAP lookups. If you have service accounts for your machines, we can do LDAP lookups and we can constrain them based on various groups. So again, if your service accounts are grouped into Nova or grouped into whatever, then you can check; and if a request is coming from a Swift box for a Nova certificate, then you know that you shouldn't give them a certificate. If we're using Keystone for identity: we've been looking at how you might use it with the Keystone service. There's some potential for chicken-and-egg type problems there, depending on how far you want to extend the use of ephemeral PKI, but it is capable today of talking to Keystone and getting back authentication information. Reverse DNS verification: this is something that people wouldn't normally do. When it gets a request from a machine, it'll open it up and have a look at the name that's been requested. It'll do a DNS lookup, get the IP, and then it'll check the log from the RESTful interface to find out which machine requested it. And if the request came from a different IP to what the FQDN resolves to, then you can have it configured to not give out a certificate for that. Now, that's pretty cool. That's not something you can normally do with PKI easily. CMDB lookups, I mentioned this already. Requester IP in valid ranges: it can be told how big your OpenStack deployment is, what IPs have been given to it, and if something outside of OpenStack is requesting certificates, it will know that. And you can check names match naming schemes. So I spoke a bit about authentication: it can know what roles different machines play in the system when they request certificates.
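Two of those checks, requester IP inside the deployment's known ranges and requester IP matching what the requested FQDN resolves to, could look like the sketch below. The DNS lookup is stubbed out with a table so the sketch stays self-contained; a real rule would call the resolver, and the ranges and hostnames are invented examples.

```python
import ipaddress

# The deployment's known address space; a real system would be told this.
DEPLOYMENT_RANGES = [ipaddress.ip_network("10.0.0.0/16")]

# Stubbed DNS data standing in for a real resolver lookup.
DNS = {"nv-compute01.example.com": "10.0.3.7"}

# Rule: the requester's IP must fall inside the deployment's ranges,
# so anything outside OpenStack asking for certificates gets refused.
def ip_in_valid_ranges(requester_ip):
    addr = ipaddress.ip_address(requester_ip)
    return any(addr in net for net in DEPLOYMENT_RANGES)

# Rule: the requested FQDN must resolve back to the IP that actually
# made the request (the reverse DNS verification described above).
def requester_matches_fqdn(requester_ip, fqdn):
    resolved = DNS.get(fqdn)
    return resolved is not None and resolved == requester_ip
```

Plugged into a decision engine, either function returning False is enough to refuse the certificate.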
And you can set rules for what certificates, or what domain names, look like for machines in different roles. So if you have a prefix or a naming scheme... now, OpenStack's never had a solid naming scheme, which is a bit of a problem. But if you have an agreed naming scheme for your deployment, which you probably do, this can enforce it. It'll only give out certificates that fit within the naming scheme. So in Nova it might be nv-, whatever, and you can check prefixes and suffixes for that. And it's an extendable rule set. I mentioned it's stateless, but we do basically have a one-way output to your given audit server, which is really cool. Because what that allows me to do is say: let's say we have a 24-hour life cycle on a certificate. I want to know exactly which machines have a certificate that is valid to be used today in my entire environment. I can know that. If we have a problem like Heartbleed, which was a problem for a lot of people, the correct response to that was to assume that all of your certificates, all of your private keys, were compromised. With this system, we'd have to update OpenSSL to the unaffected version, and then wait for the time-out on our certificates. And then we have a cryptographically verifiable, non-Heartbleed-affected, private-key-protected system within whatever our certificate window is. That's it, that's how it works, it's easy. Heartbleed: I lost a lot of nights over that stuff, and we wouldn't have done if we'd had this deployed. We probably would have lost sleep on other things, but never mind. Because it operates in a very OpenStack-flavored way, you can load balance it the same way you load balance anything else in OpenStack. Because it's stateless, the two systems don't need to talk to each other. They just need to be able to talk to an audit system where they just dump it out, to whatever your security information and event management (SIEM) platform is, or whatever you want to do.
The rule sets can be customizable depending on deployment. When you think about having a certificate authority for OpenStack, you think about having something central that everything talks to. And you can do that: one instance, or one HA deployment, of the ephemeral CA, with rules that say how your entire cloud is supposed to work. But that can be quite tricky. We operate a pretty big OpenStack-based public cloud, and our Nova guys and our Swift guys are different teams that have different priorities and different challenges. So what we can do is let them build their own rule sets for their own deployment of their PKI. What you end up with then is you may have an ephemeral CA running for Nova and an ephemeral CA running for Swift, for Neutron. And they provide all the certificates for all the services that need to interact within that service, within Nova or within Swift. Those teams would be responsible for writing those rule sets. And we (I work on the security team at HP) would review those rule sets as part of the operational security review for that service that we do periodically. But that gives them some freedom to get really creative and score bonus points by putting in really clever rules and checks. It also means that when they have their own ephemeral CA, they'll only have that trust anchor installed within that service. So only Nova machines will respect the Nova CA. Only Swift machines will respect the Swift CA. So if there is a rule failure, say within Swift: things break, people get things wrong. If anyone thinks that doesn't happen with security stuff, then you're wrong. With localized trust anchors, it means that if Swift has a certificate failure, they wrote a rule wrong, somebody breaks into Swift through some other mechanism, figures out, hey, there's this cool thing, and manages to get a certificate for something else, for *.google.com or Nova dot whatever...
They could only compromise what was going on inside Swift, because that trust anchor isn't installed on any of the other machines in any of the other services. So you can have localized rule sets and localized exposure of compromise, which is, again, pretty cool. You can't localize those sorts of things without deploying entirely separate PKI stacks normally. Unfortunately, if you do that, you still have plenty of services in OpenStack that need to talk to each other in secure ways. So you still need a high-level instance of the ephemeral CA so that Nova can talk to, say, Glance when it needs to pull down images or do stuff like that. And the way we would do that is we'd simply have a strict set of rules written for the high-level CA. So I need to apologize a little bit for the timing and for some of these slides. I was supposed to be presenting this with a colleague, so I have no idea how far through we are. Status today: authentication works with LDAP and Keystone; we have DNS- and IP-based rule sets and basic group-based rule sets. The code is about to be released for review. And this, and everything else that we put out upstream like this, will come with AppArmor profiles. Any of you that want to take them and turn them into SELinux, that would be great. I'm not gonna do it. So, next steps. This kind of stands on its own at the moment, but there's no reason it can't stand behind Barbican as another place for getting certificates. We've already spoken to the Barbican guys about the fact that our system is slightly different to a typical certificate-requesting system. You ask for a certificate, you give it your information, and you either get a certificate back or you get a NACK: go away, we don't like you. So there isn't this period of submitting a certificate request, waiting for it to go through the RA process and come back to you. And I think we've worked out that that will be okay, maybe with a few tweaks.
I'd like to see this adopted by the security group and make it a security group project that we can continue to develop along with the people already working on it at HP. Obviously, additional rule sets and authentication methods. So what we have designed is a system where you have short-life certificates. We typically would look at sort of a 12-hour window; that seems to work well in our testing and doesn't break any individual operations in OpenStack. Some services deal very well with having their certificates replaced by certmonger, because certmonger's gonna sit there behind whatever your service is doing, and it's gonna rip a certificate out and put another one in there and hope you don't notice. Some services do notice, and they get very upset. So for them, we're generally looking at placing them behind whatever your favorite TLS load-balancing bit of software is. Pound is really nice for doing this, and other things as well. And they're generally very resilient to having their certificates swapped around, because they're meant to work over many, many years and we're just doing it over a few hours. So in a system where you have, let's say, a 24-hour certificate life cycle, we would have our certmonger systems configured to request a new certificate probably every eight hours. You still have that maximum permissible period where a bad certificate could be used, which is not great. But like I say, revocation doesn't work anyway: even if you're using OCSP, your OCSP responses are probably stapled, and they're probably on 24-hour or 12-hour windows. So we're really looking at parity with those systems.
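The renew-at-a-third-of-lifetime rule of thumb is easy to state as arithmetic: with a 24-hour certificate renewed every 8 hours, a failed renewal still leaves 16 hours of valid certificate in which to raise and act on an alert. The figures come from the talk; the helper functions themselves are just my restatement.

```python
# Renewal cadence: request a fresh certificate every lifetime/fraction
# hours (the talk suggests a third of the lifetime).
def renewal_interval(lifetime_hours, fraction=3):
    return lifetime_hours / fraction

# Grace period: at the moment a renewal is due and fails, the current
# certificate still has lifetime - renew_every hours left to run. That
# is the window in which an admin can investigate before things break.
def grace_after_failed_renewal(lifetime_hours, renew_every_hours):
    return lifetime_hours - renew_every_hours
```

So a 24-hour lifetime gives an 8-hour cadence and a 16-hour grace window, and a 12-hour lifetime gives a 4-hour cadence and an 8-hour window.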
The reason you request more often is because if for some reason the ephemeral CA has decided that it hates you, it gives you another 16 hours, in this example, to raise an alert and to have an admin come and work out why the ephemeral CA hates you before your system starts to fall down. Because we are talking about: if something went horribly, horribly wrong with the ephemeral CA, or if you wrote your rules wrong, everything could stop working within 24 hours if everything stopped getting certificates. Which is great. Actually, what you'd find unfortunately with a lot of OpenStack is it would carry on working, because the certificate validation on the other end doesn't work so well, but we're gonna carry on working on that. But yeah, key points: kind of fixes provisioning, kind of fixes revocation, stateless, scales really nicely, open source, easy to deploy. Questions? Yeah, you could absolutely do that. Throughout all of this, the design principle has been to not try to be clever. It is, yeah. I mean, another reason we stayed away from message queues is because message queue security continues to be a horribly broken thing in OpenStack. This is a nice easy way of doing things. I have absolutely no objection to looking at a message queue model for managing certificate life cycles, getting things to and from the ephemeral CAs, no problem with it at all. So the question was: what's the validity of the CA and how does that get changed? We kind of punted on that a little bit. It's basically the same as ever: whenever you deploy a trust anchor, you make a risk-managed decision around how long that trust anchor's gonna live. And we don't take any more responsibility for that than you would in any other PKI system. In terms of distribution, we assume that you are using some configuration platform to do this, so you're using Chef, Puppet, Ansible, Salt.
You're doing something like that that manages these little bits of nitty gritty, which is also how you would need to get the trust anchors onto the boxes in the first place. But of course, the trust anchors themselves aren't cryptographically sensitive. You know, if you fiddle with them, they won't work, so there are lots of open ways we can deploy them to machines. So if someone's got ideas around how to manage that bit of the life cycle, I'd be more than happy to talk about them. But yeah, good question, thank you. Yeah, we've not had a conversation about it. So, did everybody hear the question? The question was that this is kind of similar to some of the early discussions around how to fix message queue security, to do with having refined cryptographic and strong integrity protection around messages on the RPC in OpenStack. Nowadays we refer to that as Kite. We've got a couple of Kite guys here. I can see where there would be overlap. I think it's a good thing to point out. It's not a conversation that we've had, although one of the guys involved with Kite is also involved with ephemeral PKI, so some arm twisting could probably be done. You don't; you trust your rule sets to make sure that the right certificates have been delivered to people. At the moment we do them randomly. I can't remember if we hash on the last one or something, but we don't care about the certificate IDs that go out, because certificate management generally is done so badly. I mean, one of the reasons we accept a certain period: we say, well, revocation kind of doesn't work anyway, so we accept a certain period where it won't work. And we accept that because, you know, I think one of the largest challenges for organizations is actually dealing with all the unused certificates and all the other stuff that was knocking around. So we know in our audit stream exactly what certificate ID was given to what box, when, and why.
So it's all in the audit stream if you need it: if you've got a forensic investigation where you see a weird certificate's been used somewhere, you can look in your SIEM platform and see exactly what machine that certificate was given to, when it was given to them, why, and how they authenticated. You can see all that in the audit stream. So we're talking about the, like, the certificate fingerprint, right? Yeah, serial number. Yeah, that's all we do. So, you know, we have N number of certificates. We know the life of the certificate is actually very short, so we're not really very worried about collisions in that space, but yeah, we only pivot on the fingerprint. Yes, it's a very good question. So I think there's really two points to that. The first one: is response time deterministic? I would say whatever the HTTP timeout is; it either will give you a certificate back or it won't, or your connection will fail. Ideally, it either decides yes or no very quickly. The other is around external workflow. While we don't have external workflow in the way that you would have in a normal PKI system, where there may be additional verification steps like you would have if you were getting a public certificate today, there are opportunities for issues where, when you talk to the CMDB system, if that's hanging for some reason, you don't want things to hang all the way down. So we just have sensible timeouts for a lot of that stuff, which again is why we recommend, if you're using 24-hour or 12-hour certificates, sending a new request every third of that time period. You're always grabbing a new certificate before the old one runs out, because if the machine doesn't get a certificate, it'll raise an alert through whatever your monitoring platform is, Icinga or whatever; that'll pop up in whatever your operations room is and someone will go and look at why a certificate wasn't granted.
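That "sensible timeouts, fail closed" behavior for external lookups like CMDB can be sketched with a thread-based timeout. This is illustrative only: the slow lookup is simulated with a sleep, and a real deployment would more likely rely on its HTTP client's timeout settings.

```python
import concurrent.futures
import time

# Fail closed: if the external system doesn't answer within timeout_s,
# the decision engine treats the check as a denial rather than hanging
# the whole certificate request.
def check_with_timeout(lookup, arg, timeout_s=0.2):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(lookup, arg)
    try:
        return bool(future.result(timeout=timeout_s))
    except concurrent.futures.TimeoutError:
        return False
    finally:
        pool.shutdown(wait=False)  # don't block on the stuck worker

def fast_cmdb(fqdn):
    # A healthy CMDB answering instantly (toy rule: Nova prefix only).
    return fqdn.startswith("nv-")

def hung_cmdb(fqdn):
    # A CMDB that has stopped responding.
    time.sleep(1)
    return True
```

Because a denial just means the machine doesn't get a new certificate this cycle, the renew-early cadence above turns a hung dependency into an alert rather than an outage.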
And then they can have a look in the audit log and see, well, this exact request came from this machine, and it turns out someone didn't update the CMDB, so we didn't know about the machine anymore. Which is great, because that doesn't normally happen; normally you get drift across all these systems. I don't really want our PKI platform to be what enforces compliance with how people should be doing CMDB and other things, but at the end of the day, if you want a well-configured system that hangs together properly, that's how it has to be. Sorry, no, go ahead, and then we'll go to the back. I wouldn't say there's anything we're happy releasing right now, just stuff that's been hacked together to make it work. Having the ephemeral CA in DevStack helps. There are still other bits that don't work very nicely with certificates in DevStack, so I know some of the Nebula guys, some of Brian's guys, have been working on improving how services use certificates in DevStack. Hopefully we can bring some of that together and then we will have better examples of how to do some of this stuff. Yeah, the back. I didn't talk to that point, thank you. So one of the things that we could do with the ephemeral PKI is have it start talking to an HSM to provide some of the cryptographic operations. Now, the scope of a compromise of the PKI platform depends on the type of compromise. If someone compromises it at the logic layer, they may be able to get past a rule set and issue a bad certificate, which is the same for any PKI platform. If they were able to get a more elevated presence within the platform, then perhaps they could get to the private key, or whatever we're using for the root set on there, and then, just like if they did that on any other PKI platform, they'd basically be able to issue certificates that would be trusted by other things.
So one of the things we can do there is offload a lot of that responsibility into the HSM, which makes people feel all warm and fuzzy, except all it really does is move the problem, because the HSM still trusts whatever your PKI platform is. That's why, when people get excited about HSMs, you have to calm them down: your HSM still trusts the machine that got compromised and will perform whatever key operations you want. The one real benefit is that you get a more trustable audit stream from the HSM, so you at least know what happened when, and can trust that it hasn't been tampered with by whoever was tampering with the PKI platform. So, I mean, we have a bunch of KMIP code for Barbican that we're working on anyway, and we're looking at contributing to the PyKMIP library that the APL guys have been doing. I know some of you are here; thank you for that, it's great. It's not, I think, all the way it needs to be, but it's awesome that someone's taken it that far, and we will help it get further. So yeah, we'll use it. We'll definitely make it an option, but it doesn't have to be used, because this is supposed to enable PKI right the way through all types of OpenStack deployment. Cool. Any more questions? No? Brilliant, thank you very... oh. So that's a very good point. I mean, one of the initial problems this was designed to fix is that PKI administrators don't scale very well: they get bored, they get grumpy, and they go off and do other things. Unfortunately, if this is a very successful system, say you've got a cloud of 10,000 nodes, you're probably getting at least 10,000 certificate requests, probably somewhere in the order of double that, and if you're extending this to instances as well, that could go up by a factor of 10 or something like that. I believe it will scale because it's just based on very simple web technology, and the whole point of this is we're not trying to be clever.
We built it using PGUN. If you can scale your OpenStack APIs, you should be able to scale this. At the end of the day, if you have scale challenges like that, you're probably prepared to put money behind them, which means you put a TLS-accelerated load balancer in front of it or something, and your scale problems go away until you need to spend more money. Unfortunately, that's how scale works, right? You don't get everything for free just through software; you have to do some difficult things sometimes. So yeah, it's a problem I'd love to have, and I'm sure we can solve it in a bunch of different ways, but it's definitely something you need to be aware of. If you just deploy this on one machine and then stand it up in a massive data center, you may have a bad day, but you'd probably have that same bad day with any other central service you tried to deploy in that sort of scenario. Anyone? Awesome. Thank you very much. I appreciate all your time.