Alrighty, so I guess we ran a little over on the last one, so we'll go ahead and get started while a few people are still kind of trickling in here. So today I'm gonna talk about Barbican 1.0. Barbican is the key management product that a bunch of us at Rackspace have been working on for about six or seven months now. We've kind of hit our 1.0 release, which is what we're talking about today, and it's actually being deployed at Rackspace as we speak. Hopefully we'll have a limited availability for people to start playing with it and kick the tires a little later this year, once we get back from the summit. So I do have to apologize real fast. My co-presenter, Matt, had to stay in the States; he had some family issues to take care of. My name's Jarret Raim. I'm the Cloud Security Product Manager at Rackspace, so my job is basically to build products that customers use to meet the various security needs that are part of their configurations. I started out doing security research in school, then I was a security consultant for about three years at Denim Group, so I basically went out to companies, assessed their applications, helped them build secure software, fixed problems that they had, that type of thing. And then at Rackspace I started out as a security architect, kind of securing our internal stuff, before I moved on into security products. So for those of you that were at our presentation in Portland, this slide should look familiar. Basically we polled about 170 of our customers, and these are all over the map: big customers, small customers, cloud customers, dedicated customers, hybrid customers. And we asked them, out of this group of things on the right-hand side here, what do you care about? What do you expect your provider to help you do? What matters to you? What do you think makes you more secure?
And the big takeaway is that data protection just blew the doors off the place, right? Nothing was even close. And there's a couple of reasons for this. One is that data protection is easy for somebody non-technical to get their head around. But two, obviously people care a lot about their data. One thing I also found very interesting is that when you talk about people getting compromised, configuration and patch management are actually the vast majority of the reason why people get compromised, and yet that one is way down at the bottom. But data protection is obviously something that's very important. It's something that customers expect us as service providers, and as people running OpenStack clouds and providing services to them, to help them with, right? And so as OpenStack has matured and grown, as we've added all these projects and gotten more sophisticated, a lot of requirements for encryption are starting to come up: encrypted data at rest for things like Swift and Glance and Nova and Cinder and all these different services that are storing customer data. When you start getting into networking, we've seen SSL VPN work going on in the design sessions this time around, so we're talking about how to maintain the keys needed for creating these IPsec tunnels and various other things. Obviously Keystone has various needs for encryption and verification. And then in the Mirantis presentation just next door in the last slot, we were talking about being able to hit compliance goals for OpenStack clouds, right? And one of the challenges there is: where do you store passwords? Where do you store encryption keys and those types of things? So as we were looking around, we started to see a lot of diagrams in OpenStack and a lot of blueprints kind of saying, here's all the stuff we wanna do around encryption.
There was this little box in the right-hand corner that said KeyManager. And when we would ask about it, they'd say, oh, we'll figure that out later. At one point we actually downloaded the only code anyone had for it, and it was a single Python file that basically had a get and a set and threw a NotImplementedError. I was like, great. So that's really where we started to dig in, as we saw a deep need in OpenStack for this. In addition, we see a need as security people, right? Matt and I are both part of OWASP, the Open Web Application Security Project. We were both security consultants, so we saw a lot of code written by a lot of companies, and in general the encryption was pretty terrible in all of those places. And so we wanted to create a structure that would not only solve the problem for OpenStack but also solve the problem for the people using OpenStack, for our customers, right? And so that meant making it very easy to secure things like Rails and Django and some of these other types of applications kind of by default, right? So that's where the goals of Barbican came from. Really, settings files were the big one; we talked about that in the compliance session next door. Default OpenStack installs tend to have passwords and keys and various other things just kind of littered through configuration files. And then of course SSH access and SSL access and all these other types of things have to be handled. So for Barbican as it is now, and going forward, we wanted to support a couple of different interaction models, right? There are different types of customers with regard to encryption. On the least secure side you have the checkbox encryption people, right? Some of this is compliance driven, some of this is just, oh, sure, why not encrypt some stuff, right?
And they expect us, the OpenStack provider, to do the entire thing, right? They don't wanna manage the keys, they don't wanna manage the encryption, they don't wanna make those decisions, they just wanna say, yeah, go ahead and encrypt it. And that does provide some value; there are certain elements in a threat model for which that can be helpful. It's not super secure, it doesn't provide a whole lot of data protection, but it does provide some. Then we have a federated keys model that allows the customer some control over the keys while still allowing the operator to do some of the work. And then finally there's the more traditional, kind of super secure option of doing it all yourself, right? You do all your on-premise key management, you manage the whole thing yourself, and the only data that ever hits your cloud provider is already encrypted. Those are the most secure models. Barbican wanted to be able to support all three of those groups; they're very different people with very different goals, and we wanted to support all three inside of OpenStack. So on the transparent encryption side, the way it fundamentally works is we have a consuming service sitting on the hosting provider side, in this case Rackspace. This could be Swift, this could be Nova, this could be any OpenStack service that wants to provide some type of encryption for a customer. And so we have a Python client that we produce, python-barbicanclient, which is up on PyPI now, and that sits inside of that service, right? So Swift pulls it in or something like that. And at that point, when Swift wants to perform an encryption operation for a customer, it says, okay, I have tenant 12345, I need the key for tenant 12345, so it reaches out to Barbican.
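As a minimal sketch, that lookup could look something like this over Barbican's v1 REST API. The base URL, secret ID, and token here are hypothetical stand-ins; the `Accept: application/octet-stream` header is the v1 convention for asking for the raw payload rather than the secret's JSON metadata:

```python
from urllib import request

BARBICAN = "https://barbican.example.com/v1"  # hypothetical endpoint

def key_request(secret_id, token):
    """Build the request for a secret's raw payload (Barbican v1 style)."""
    return request.Request(
        f"{BARBICAN}/secrets/{secret_id}",
        headers={
            "X-Auth-Token": token,                 # Keystone auth token
            "Accept": "application/octet-stream",  # ask for the raw key bytes
        },
    )

def fetch_key(secret_id, token):
    """Fetch the tenant's data encryption key; use it, never store it."""
    with request.urlopen(key_request(secret_id, token), timeout=10) as resp:
        return resp.read()
```

The consuming service would call `fetch_key` per operation and throw the key away afterwards, which is exactly the behavior the federated model later relies on.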
Barbican is storing that key; in this case at Rackspace, our version of Barbican sits on top of SafeNet HSMs, so the keys are actually stored there. We'll talk about how we store those keys in a couple of slides here. But it's stored at Rackspace. We return the key back to Swift, Swift does the encryption, stores the file, doesn't store the key, obviously, and away it goes, right? So this is very simple. As a customer you could go in and say, okay, I'm gonna check the box, I wanna encrypt this container in Swift, and not worry about it after that; Rackspace takes care of everything for you. That being said, at the end of the day, Rackspace has access to the key and access to the data, right? So if the NSA shows up and says, give me both of those things, we don't really have a choice, we're going to have to do that. Any US company, or really most companies these days, would have to do that anyway. So this doesn't provide a huge amount of protection, depending on what you as a corporation are trying to protect against. It certainly does protect against certain types of threats, somebody walking away with a hard drive from a data center or something like that, but it doesn't provide a whole lot of security. It is very simple, though. So in the federated keys model, there's kind of two ways that we think about this. The first is using Barbican, and the second is something that we're playing with that might be slightly more secure. So in this model we start out with, again, Swift with the Barbican client running inside of it. A customer tries to store a file that they want to encrypt, right? So what Swift is gonna do is reach out to the Barbican instance at Rackspace and say, I need a key for this tenant, and our Barbican instance is gonna say, okay, well, I don't store keys for that tenant, I'm federating that tenant.
So it's going to reach out to the federated instance of Barbican sitting on the customer's prem. As a customer, you would install Barbican on hardware sitting in a data center that you own, and you would attach that to your existing HSM infrastructure; or of course you could just use a software one if you want to. But a lot of bigger customers, especially financials, already have an HSM infrastructure. They don't want to buy a bunch of new ones; they've already done all the work to secure that piece, so they just want to install Barbican and have it talk to that. One of the nice things is that now you don't have to expose your HSM over a public connection or set up some weird VPN garbage. You can just put Barbican in front: it's a REST service, it's just HTTP and JSON, secure that as you would expect, and then Barbican talks back to their HSM to pull the key, right? So at this point, what we actually do is the consuming service will generate a public-private key pair. It's basically a transport key, if you're familiar with that. It'll send the public key along with the request. Our Barbican, the Rackspace one, sends that public key along to the other Barbican. When the customer's Barbican pulls the key from the customer's HSM, it wraps it and sends it back through, right? So at that point the key actually transits my instance of Barbican, the one I own at Rackspace, and goes to Swift, but as Barbican I can't unwrap that key, because I don't have the private key for it, right? If you're familiar with HSMs and all that stuff, this is just straight transport key stuff, nothing magic. But it allows that key to be moved from the customer directly to the consuming service without the intermediate Barbican having access to it. The idea is just to limit the number of places the service provider can touch your key. At the end of the day, fundamentally, that key's gonna be unwrapped in Swift and it's gonna be used, right?
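The transport-key wrapping described above can be sketched with the `cryptography` package. This illustrates the general RSA-OAEP wrap/unwrap pattern, not Barbican's actual wire format:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. The consuming service (e.g. Swift) generates an ephemeral transport
#    key pair and sends only the public half along with the key request.
transport_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. The customer's Barbican pulls the tenant's data encryption key from
#    its HSM and wraps it with the public transport key.
dek = os.urandom(32)  # stand-in for the 256-bit key from the customer's HSM
wrapped = transport_key.public_key().encrypt(dek, oaep)

# 3. The wrapped key transits the provider's Barbican, which can't read it;
#    only the consuming service holds the private half and can unwrap.
unwrapped = transport_key.decrypt(wrapped, oaep)
```

The intermediate Barbican only ever sees `wrapped`, which is the whole point: it forwards bytes it cannot decrypt.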
And that's just kind of the nature of the beast. And so at that point, Swift will do its encryption or decryption and then it will toss the key away, right? This is a much nicer model; it provides a lot more security. At the end of the day, you're still trusting the service provider a little bit. So as Rackspace, I have to tell you in some kind of SLA or legal document: I'm not going to store your key for more than X minutes, you're only delegating access to that key for the requested operation, and then I'm gonna throw it away. So Rackspace doesn't store it all the time. If you're not immediately making calls to the system and somebody shows up with a subpoena, or somebody breaks into our systems, they can't get that key, because I don't have it, right? So that's nice. But it does require customers to run HSM infrastructure or secure key infrastructure on their prem. Now, Barbican will provide a software backend. We haven't done it yet, but we'll provide a simple software backend that somebody can use, so you can still store the keys on your prem even if you don't wanna go out and spend tens of thousands of dollars on a big HSM infrastructure. So the second model for federation that we've looked at is, rather than having the Barbican client talk to the Rackspace version of Barbican, it could just reach out directly to the customer's version to pull that key back. And then you don't have to do this kind of key wrapping behavior. But it's a little bit trickier, because now Swift is making these calls back, and so if you wanna be able to limit it from a network standpoint and some of these other types of things, it may not be the cleanest thing on Earth. But that's an idea that we're playing with.
And this model can actually get a little more complicated when you start talking about, well, maybe we'll put KMIP support directly into the Barbican client, right? So rather than having the Barbican client speak to the Rackspace version of Barbican, and maybe you don't wanna run Barbican on your prem, maybe you already have some HSMs and you're willing to expose those on an endpoint that I can hit; well, maybe the Python client can talk directly to whatever HSM you have using KMIP or something like that, to give you fewer elements in the chain between key and use, right? That's kind of the goal. So right now what we're implementing is this piece right here. This is what we think is probably the easiest way to do it. We're gonna see; we've got a couple of big customers that are gonna help us try it out. We'll see what people think, and if they don't like it, we'll try some new options. And then finally, of course, there's the simple on-premise model. In this case, customers basically do all the work themselves. The service provider really isn't involved; I'm not doing the encryption or decryption for you, you're doing it. That being said, customers could take Barbican, install it on their prem, and use it as a nice REST interface on top of their HSM infrastructure to provision keys, do lifecycle management, all those types of things. It makes it easy for developers to do that and perform those operations correctly. And then at the end of the day, when you're sending the data to whatever cloud provider you're using, it's already encrypted. I don't have the key, so even if I could give that data to somebody, it doesn't make any difference, right? So that's the most work from the customer standpoint; it's the most complicated for them. It also means they have to do all of the actual encryption locally, so there are some performance issues depending on how your systems are using the cloud.
But of course it offers the most security, because the key and the data are never owned by the same people, right? You can also think, when we talk about federated or even on-premise, about using two cloud providers rather than storing it on your own premises. So if you want to put Barbican in HP, if they have a version of it, and you're using Rackspace's Cloud Files, then you could have those things be separate, right? It does keep the key and the data in separate places. Now, those are both US companies; you may want to move them into different geographical regions or something like that to get a little more legal security. It kind of depends on what you're trying to protect against, but it gives you the flexibility to move around who owns the key and who owns the data. So just real quickly, I'm not gonna do the whole Vagrant demo, because it would take way too long to set up. But one of the pieces that we spent a good amount of time on is getting all the Vagrant and Chef scripts working for Barbican. So when you go home, if you want to play with the Barbican system, download two repos, vagrant up, and you'll get our entire system. So you can see... wow, that doesn't come through at all, does it? You can kind of see there are five servers right here that it runs: an HA RabbitMQ queue, an HA Postgres database, and then an API node and a worker node. So it's relatively simple; our goal is to make it as easy as possible for you to get up and running with the system. And so hopefully, if you guys are interested, when you go home feel free to start playing with it. We've updated all of our documentation before the summit, so hopefully it should be relatively followable. If it's not, go ahead and put a bug in on GitHub and we'll start to fix it. So this is the structure that Rackspace is using for Barbican, and I just want to talk through a couple of decisions that we made.
So we're actually hosting on physical infrastructure; because this is a high-security system, we wanted a little more isolation than we could get from some of the other options. So we actually use a physical firewall and load balancers. After that, we have our set of API nodes; these are just relatively simple Python running on top of CentOS. For our queue, right now we're using HA RabbitMQ. That might change, but that's what we're using now. And right now we're using HA Postgres as the database. We've run into some challenges with that, with doing kind of geographical replication. Obviously at Rackspace, in this particular case, you can never lose a key, right? And so we need to replicate these across multiple data centers and control how that access works, which is a little bit challenging to do with a pure Postgres database. We originally chose Postgres because we wanted something with a very solid security model, as opposed to a lot of the NoSQL services that aren't quite there yet. So one of the things we're talking about doing for the Icehouse release is at least offering an option of, if not completely switching to, a directory. So you could use OpenLDAP, or at Rackspace we use CA Directory under the covers in a lot of cases. That gives you a strong security model and a very stable set of software, but makes replication significantly easier in a couple of spaces. I don't know, I'm definitely open to opinions if you wanna talk to me afterwards, but that's kind of what we're thinking. You can see the HSM over here; we use SafeNets, but the plugin that's included right now is PKCS#11. So if your HSM supports PKCS#11, go to town. We've only tested on SafeNets; it should work on some of the others, but you never know until you try it.
So I have been talking with the SafeNet crew. They said that they've written a KMIP driver in Python, which is good, because that was something we were missing. It hasn't been open sourced yet, but I'm told they're working on it. And so hopefully, if that gets done, we'll pull that in, and then you can use the PKCS#11 driver or the KMIP driver; and then we supply a development driver that's totally insecure but is useful for writing code. We use Chef for all of our configuration management, and our Chef scripts are also open source. Like I said, we use them for Vagrant, and you can use them yourself, so you can use Chef to provision your things. We also use Ansible; I don't know if anybody's familiar with that, but we use Ansible for orchestration-type operations. This might be a little specific to Rackspace, but for things like being able to autoscale, add API nodes, do rolling upgrades, those types of things, we tend to use Ansible; it's a little easier to do orchestrations with it as opposed to Chef. We have a metrics server, which is StatsD and Graphite, relatively simple stuff there, and then finally we have a set of workers. All right, so one thing to talk about is a little bit of how Rackspace decided to do key storage. So right now we don't store customers' data encryption keys in the HSM, and the reason is that when we first started looking at Barbican, most HSMs had a limit on the number of keys they could store. They all still do, but the limits have gotten a lot higher. When we first started it was a couple hundred thousand; now I think it's around a million or something like that. And the challenge is, when you start talking about how you wanna encrypt elements and objects in Swift, or even virtual machines at Rackspace, you can start to see that a million keys is probably not gonna get you there.
And so we wanted to make sure that we basically have an unlimited number of keys that we can provision. So the way we actually store keys is: we have a data encryption key coming from the customer. This is the actual key being used by the system. We take that into Barbican, or we generate it if you ask us to. We send it to the hardware security module, the HSM itself. The HSM encrypts it using a key encryption key, and those key encryption keys are unique per tenant. So basically, right now, if your HSM has a limit on the number of keys it can store, that's how many tenants your cloud can support. A million tenants is a lot of tenants, so we're relatively comfortable with that decision right now. And that key encryption key always sits inside the HSM; it never leaves it. So that key is completely stored inside the HSM. You can't get it out; if you open it up, the acid will burn away the chip or whatever. And then once the data encryption key is encrypted by the HSM, that's what we store in our data store. So anything we store has always been encrypted by something in the HSM. If you get a hold of our database, all you're gonna get is encrypted data encryption keys; you're not gonna get any love from that. You would actually have to crack the HSM to get the key encryption keys out and use those to decrypt the elements in the database. And of course, each element of the database is tied back to a tenant, which has a different key encryption key. So a single tenant could have hundreds of thousands of data encryption keys, depending on what they're doing, and those are all encrypted and decrypted with a single key encryption key right now. Does that make sense? All right, so the Barbican API right now is relatively simple. We support symmetric keys, and we support you just doing general CRUD. So if you wanna generate the keys yourself and pass them to us, that's perfectly fine.
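That wrapping scheme can be sketched like this. Here the `cryptography` package's AES-GCM stands in for the HSM operation, and using the tenant ID as associated data is an illustrative detail, not necessarily how the HSM binds keys to tenants:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_dek(kek, dek, tenant_id):
    """Conceptually what the HSM does: encrypt the tenant's data
    encryption key (DEK) under that tenant's key encryption key (KEK)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, dek, tenant_id.encode())

def unwrap_dek(kek, blob, tenant_id):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(kek).decrypt(nonce, ciphertext, tenant_id.encode())

# One KEK per tenant lives inside the HSM; only wrapped DEKs hit the DB.
kek = AESGCM.generate_key(bit_length=256)    # never leaves the HSM in reality
dek = os.urandom(32)                         # per-object data encryption key
stored = wrap_dek(kek, dek, "tenant-12345")  # this is what the database holds
```

The database ends up holding only `stored`-style blobs, so a database dump without the HSM gives an attacker nothing usable.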
Or you can ask us to create them for you. So the secrets resource is the first resource that we support. This is basically just CRUD on top of a secret; you can see we're posting a secret here. We support expiration for all keys, not just asymmetric ones. This is a very nice feature if you're doing compliance, where you're required to rotate keys every once in a while. With Barbican, you can just say, okay, by this date it must be rotated, and so I will never serve this key again. And at that point the API will just log it if someone tries to get access to that key, but it won't actually give the key back. And you can see we're obviously storing all the information about the particular key that we care about. In this case, they're actually passing us the key as the payload. And then we have a content type that allows you to specify what format keys are in. And then here's just the GET, which pulls back the key information itself. So it's relatively simple stuff if you've looked at REST, not really complicated. Our python-barbicanclient also provides a command-line client called keep, which allows you to interact with the API via the command line if you'd like to. That can be helpful when you're doing system administration tasks, those types of things. So the second resource that we support is what we call orders. An order is how you ask us to create a key for you. We've had some conversations with people on the mailing list about this, and it's been kind of confusing, but the reason we chose these semantics, this structure, is that we wanna support not just AES keys and symmetric keys, but asymmetric keys and SSH keys and all the various types of encryption products that you can ask us to generate, right? And so in this case, this is an asynchronous call. You basically put an order in and then you poll that order.
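As a sketch, a request body for that kind of POST might be assembled like this. The field names follow the general shape of the v1 secrets API described on the slide, but the exact names, the sample date, and the helper itself are illustrative:

```python
import base64

def build_secret(name, key_bytes, expiration=None):
    """Assemble a body for posting a secret (v1-style field names)."""
    body = {
        "name": name,
        "algorithm": "aes",
        "mode": "cbc",
        "bit_length": len(key_bytes) * 8,
        # raw key bytes are base64'd so they survive the JSON transport
        "payload": base64.b64encode(key_bytes).decode("ascii"),
        "payload_content_type": "application/octet-stream",
        "payload_content_encoding": "base64",
    }
    if expiration is not None:
        # ISO 8601 date after which Barbican will refuse to serve the key
        body["expiration"] = expiration
    return body
```

The `expiration` field is what drives the rotation behavior mentioned above: once the date passes, a GET on the secret is logged but the payload is never returned.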
And when the order is complete, it'll actually give you a secret ref right here, and that will tell you that the secret you asked for has been generated. Now, for an AES key, that's basically gonna be instantaneous, right? All we're gonna do is reach out to the HSM, have the HSM generate the key, store it, and then return it to you via the secret ref. So it's relatively quick. In the asymmetric case, it gets a little bit fuzzier. If you're asking us to provision an SSL certificate based on a public CA like Symantec or VeriSign or Thawte or something like that, we're gonna package all that stuff up and send that request to them. But if you're asking for an EV cert, it may take days for that to be provisioned, because VeriSign may reach out to you to verify all of your information, all that kind of stuff. So that's why we went with the asynchronous model. You can poll that as much as you want, and you will get updates on exactly what's going on in the back end. It allows us to accommodate the fact that some of those keys could take up to weeks to be provisioned, depending on how things are going. So one of the big things we'll be working on for the next release will be SSL support; I don't know if we're gonna get it in. We're also gonna be working pretty hard on transparent disk encryption for Nova. And so hopefully those two will get in for the next release, but we'll see. Rackspace provisions certs through the Symantec group, so that'll be VeriSign, Thawte, GeoTrust, and RapidSSL. And so you'll be able to provision certs off of those through Barbican. We would also like to offer just an internal CA: if you wanna run your own CA, be able to provision your own certs internally, with a nice API on top of it to manage all that, we'd like to do that as well. We'll see how far we get on that one. But those are kind of coming up.
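The polling loop for orders can be sketched like this. `get_order` is a placeholder for however you fetch the order resource; the ACTIVE/PENDING/ERROR status values mirror the order lifecycle described above, though the exact field names are assumptions:

```python
import time

def wait_for_order(get_order, order_ref, interval=2.0, timeout=300.0):
    """Poll an order until its secret is provisioned.

    get_order: callable returning the order's JSON dict for order_ref.
    Returns the secret ref once the order goes ACTIVE.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        order = get_order(order_ref)
        status = order.get("status")
        if status == "ACTIVE":
            return order["secret_ref"]   # secret is ready to fetch
        if status == "ERROR":
            raise RuntimeError(f"order failed: {order.get('error_reason')}")
        time.sleep(interval)             # still pending: keep polling
    raise TimeoutError(f"order {order_ref} not ready after {timeout}s")
```

For an AES key the first poll usually succeeds; for an EV cert you'd raise the timeout dramatically or poll on a cron-like schedule instead.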
So I don't know how well this is gonna work with all of this. Let me try to make this a little bigger. Can you guys see that at all, or no? Kind of, maybe? It's the wrong aspect ratio, but all right. So I wanted to give a quick example of how we might integrate Barbican into an OpenStack service, and what we chose was Swift. So we decided to build basically an encrypting proxy for Swift. The way it works is we have our own proxy that sits out in front, and when you hit the service catalog, when you hit Keystone and authenticate, you actually get two endpoints for Swift: the regular Swift endpoint and the encrypted Swift endpoint. So this is an example of transparent encryption. As a customer, if you post to the regular Swift endpoint, it's not encrypted; if you post to the encrypted Swift endpoint, it is. But as far as you care, that's the only change you have to make. And the proxy that we used is based on a project called Pyrox, which is an open source Python proxy. So I'll go ahead and start this up. And then, this is gonna get super small again; hopefully you can kind of see it. So basically I have a file here, just "Hello OpenStack in Hong Kong", a simple text file. And if I go and look at my Cloud Files account right now, there's nothing in here. This is super slow thanks to the latency of going back and hitting the US. So I have a curl command here that I'm gonna use. Now, the fun thing about this is that because it's a proxy, it speaks exactly what Swift speaks. So if you have Cyberduck or any other client, as long as it conforms to the Swift API, it will work with the encrypting proxy. There is no difference. And this curl command is the exact same curl command that you would send to a regular endpoint. I don't know how easily you can see this, but we're using chunked transfer encoding, and we pass in our auth token, which is hidden because I don't trust you people.
And then we're just gonna pass in our file. So this is going back and hitting DFW, so hopefully it won't kick me out. How did my auth token expire? See, now you're gonna see my auth token. Hopefully I won't get blocked; this wireless blocks weird ports, I don't know if you guys have noticed. All right, so nobody write this down. Let's try this again. Hey, look at that. All right, so if we go back to our proxy, you can basically see what's going on here: it's pulling in the content of the file, determining whether it needs to be padded or not, applying the padding as needed, and encrypting it. Right now we're still using just AES-256-CBC, 'cause GCM wasn't ready yet when we were playing with this. And so now, if I go over into my Cloud Files account, I'll have the file that was uploaded. There it is. So one thing that we did choose is not to encrypt container names and file names. The reason we chose that, at least for the demo, is that we didn't wanna break all the other tools that are used to using Swift, right? If we did that and you logged in with Cyberduck, you'd just get all this garbage. In reality, if you wanted to go a little bit farther and add that extra protection, it wouldn't be particularly difficult. At that point, you may actually be better off having a single block of encrypted data and just letting us manage the files inside of that block, so you're not exposing the sizes or how many files or any of those types of things. It just means that Swift becomes a little less useful if you're not using it through the actual encrypted piece. So you can see now I've got my file that's been uploaded. I'm gonna go ahead and download it here. So now if I pull this up, I don't know how well you can see it, but down here, basically, if I cat that file, it's just garbage, right? So the file that's sitting in Swift is encrypted, right?
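The pad-then-encrypt step the proxy performs can be sketched with the `cryptography` package. This shows the general AES-256-CBC with PKCS7 padding pattern the demo used, not Portcullis's exact on-disk format; prepending the IV to the ciphertext is an assumption for the sketch:

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_object(key, plaintext):
    """AES-256-CBC with PKCS7 padding; IV is prepended to the ciphertext."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()          # pad to the 16-byte block
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def decrypt_object(key, blob):
    iv, ciphertext = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```

With GCM you'd drop the padding entirely and get integrity for free, which is why the talk mentions wanting to move to it; CBC is what was ready at the time.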
And when I go to the website and download it directly, I just get encrypted data, because I didn't go through the proxy, right? So now if I go back, I just grab a different curl command to pull the actual file down itself. Nothing special here; again, we're just pulling that file down, passing in the auth token, exactly what you would do if you were talking to Swift normally, and then it decrypts it for you and passes it back. So it's a relatively simple example; there's not too much magic here. And this is definitely a POC. We actually spent some time with the Swift guys and talked to them a little bit about how this would happen in real life. So we're working with IBM; they've apparently got some code that they've written to pull this functionality into the actual proxy server inside of Swift, so Swift would kind of own it. But basically, we just wanted to give an example of how hard this was to do. I think it took us about a week, maybe two or so, to get everything up and running. So it's not particularly difficult to do. Barbican does all the key management on the back end, no particular magic there. So it is relatively simple to pull into the rest of the services. As we get Barbican up and people start getting comfortable with it, we'll be working a lot with the other projects inside of OpenStack to help them make use of it if they have needs for encryption. So the proxy that I just showed is what we call Portcullis. It's kind of hard to read, but it's just an HTTP reverse proxy. Right now we do a key per file. Originally we tried to do a key per container, but the challenge is that Swift has semantics that allow you to copy a file between containers. Things get really messy at that point, because I would have to pull the entire file down to the proxy and then send it back up, which, if somebody's putting 20 gig files in Swift, gets really grumpy for me.
And so we just went with a key per file, which allows us to move the file around and doesn't cause any particular headaches with Swift. As I said, we kept file names and container names. So one thing that we did add to the proxy that's a little bit different is a verify resource. What we wanted to be able to do was use GCM support or HMAC support on top of the file, right? The problem is that when I get the file, I'm decrypting it and just streaming it down to you as I decrypt it. So I don't know whether the file has verified until I stream you the last byte, but at that point you already have the entire file. So there's no way for me to tell you a priori, okay, well, that didn't verify, without pulling the whole file down, decrypting it, verifying it, and then sending it to you, which I'm not gonna do, right? So we actually added a separate resource called /verify. Basically, when you download the file, you get an additional GUID that identifies this particular download. And then once the file's complete, you can make another request to the API and say, hey, did the HMAC pass or not? We'll tell you yes or no, and we keep that around for 24 hours or something like that. Obviously Swift already provides some MD5 stuff to offer some level of protection, but if you want the full cryptographic HMAC, then you could certainly do that. And then, as I said before, Pyrox actually performs the flow control, so we're doing all streaming in the API; no data actually backs up on the proxy itself. So this is my fun picture that I found for Icehouse. So, what we're talking about working on next: the big thing we'll be working on for the next six weeks is transparent encryption in Nova.
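The /verify flow he describes, where the proxy folds each chunk into an HMAC as it streams the download and parks the pass/fail result under a per-download GUID, could look something like this. This is a sketch under my own assumptions (class and method names are invented, the real Portcullis code may differ):

```python
import hashlib
import hmac
import time
import uuid

class VerifyStore:
    """Sketch of the /verify resource: while a download streams out, the
    proxy updates an HMAC chunk by chunk; once the stream ends, the
    pass/fail result is stored under a per-download GUID for later lookup."""

    TTL = 24 * 3600  # results kept around for roughly 24 hours

    def __init__(self, key: bytes):
        self._key = key
        self._results = {}  # guid -> (passed: bool, finished_at: float)

    def stream(self, chunks, expected_mac: bytes):
        """Return (guid, generator). The generator yields chunks to the
        client unchanged; only after the last chunk is the HMAC known."""
        guid = str(uuid.uuid4())
        mac = hmac.new(self._key, digestmod=hashlib.sha256)

        def gen():
            for chunk in chunks:
                mac.update(chunk)
                yield chunk  # the client already has these bytes by now
            ok = hmac.compare_digest(mac.digest(), expected_mac)
            self._results[guid] = (ok, time.time())

        return guid, gen()

    def verify(self, guid: str):
        """GET /verify/{guid}: True/False once the download finished,
        None if unknown, not finished, or expired."""
        entry = self._results.get(guid)
        if entry is None or time.time() - entry[1] > self.TTL:
            return None
        return entry[0]
```

This captures the core constraint from the talk: the verdict simply cannot exist before the last byte has already been handed to the client, so it has to live behind a second request.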
And so that's allowing customers to build servers, Windows is the one we're working on at the moment, that are fully transparently disk-encrypted, with all the keys managed on customer prem, using Barbican to facilitate that entire piece. So that's the federated model we were talking about before. Then we're gonna be working on KMIP; we talked a little bit about that. Hopefully, if that KMIP library gets open sourced, we'll use that; if not, we'll probably have to write one at some point. SSL and TLS we talked about, federation we talked about. And lastly, we'll be working with a lot of different projects to help them integrate. Our assumption is that we're gonna be writing a lot of that code ourselves, but we'll see how it goes. So hopefully everything is all set up for you guys to play with Barbican now. python-barbicanclient is up on PyPI; you can pip install it right now. Source code and documentation are all out on GitHub. The integration environment: we have an environment stood up at Rackspace. It's just running on our cloud, and it is not secure in the slightest, so do not use it for production. It doesn't run on top of HSMs; it's just using our dev plugin. But if you wanna write some code and hit the API, play with it, see how it works, then it's up there. You can hammer away at it and tell us what you think. You can see some examples here of what the actual client looks like, all that kind of stuff. So definitely spend some time, kick the tires, let us know what you think. We would love to have people look over stuff; anytime you write crypto code, the more eyes, the better. So if you guys wanna take a look, let us know. I think that's pretty much it. We hang out in #openstack-cloudkeep on Freenode. It's github.com/cloudkeep. And then of course we have our list that no one uses, because everybody uses IRC.
So the devs are all hanging out in there, so if you have any questions, feel free to reach out with them. Otherwise, does anybody have any questions now? Anything we wanna cover? So the question was, why did we choose to write our own proxy? Well, we already had it, and so we just used something that we had. This is not something that I wanna maintain. I wanted to see: okay, we built this thing, how hard is it to get it into Swift? Because I can't go to the Swift guys and say, hey, you should totally integrate all this, it's super easy, if I've never done it. And so that's why we chose it, just to do a proof of concept. The long-term goal would be exactly that. So we're talking with IBM, which, as I said, has written some middleware that will fit inside Swift itself. We'll probably maintain that middleware shim along with them if they want to. And so then you'll just deploy Swift, install that middleware, configure it, and then you're good to go. So the question was, does the customer generate the key or do we generate the key? They can, or they can ask us to generate it. If you ask us to do it through the orders resource, then we generate it off the HSM, which has hardware randomness and all that kind of fun stuff. If you wanna generate it locally and send it to us, or you have an existing key that you wanna have us store, then we'll do that for you. So fundamentally, we store anything. We don't really care whether it's a key or not; people put whole configuration files in Barbican, and we don't care. You just tell us that this is UTF-8 or this is plain text or this is binary, and that's it. Now, if you ask us to generate a key, or we know what that key type is, we can put some additional functionality on top of it. But you as a customer can decide whether you wanna generate keys locally or ask us to do it. Yeah, so the question was, how do we authenticate when someone asks for a key?
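The two paths he describes, storing a secret you already have versus asking Barbican to generate one via the orders resource, map to two different request bodies. The sketch below builds both payloads in the general shape of Barbican's v1 API as I remember it; the exact field names should be checked against the current API docs, and the helper names here are mine:

```python
import base64
import json

def store_secret_payload(name: str, data: bytes, content_type: str) -> str:
    """Body for POST /v1/secrets: storing a key (or any blob) you generated
    yourself. Binary payloads travel base64-encoded; text goes as-is."""
    body = {"name": name, "payload_content_type": content_type}
    if content_type == "application/octet-stream":
        body["payload"] = base64.b64encode(data).decode("ascii")
        body["payload_content_encoding"] = "base64"
    else:
        body["payload"] = data.decode("utf-8")
    return json.dumps(body)

def order_key_payload(name: str, bits: int = 256) -> str:
    """Body for POST /v1/orders: asking Barbican to generate the key for
    you, off the HSM's hardware randomness, instead of supplying one."""
    return json.dumps({"secret": {
        "name": name,
        "algorithm": "aes",
        "bit_length": bits,
        "mode": "cbc",
        "payload_content_type": "application/octet-stream",
    }})
```

The point from the talk survives either way: the server treats the payload as an opaque blob tagged with a content type, and only layers key-specific behavior on top when it knows (or generated) the key type.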
Right now we use Keystone, which is fine. But at the end of the day, somebody goes, well, then don't you just have a Keystone credential sitting on your box? Yeah, right. And so we've talked a little bit about using some of the Keystone sub-account type of functionality and being able to have individual servers have their own individual accounts. So, if anybody was there in Portland, we talked about an agent we've looked at doing called Postern. We did a POC for it, it worked pretty well, and it's on our list of things to work on. Those would have individual key pair generation and some of that type of stuff to try to lock that down a little bit. Fundamentally, at the end of the day, there will be some credential sitting on your box that allows you access to Barbican. But part of Barbican is auditing all of that, so you know who's touching your stuff. And then we'd like to implement a policy framework where the server will decide whether or not a particular request, even though it's authenticated from Keystone's point of view, is something that it's willing to accept. And so we'll get as far as we can on that; we'll see how well we do. The question was: the model is multi-tenant, multi-project, so one domain and multiple resources in Barbican, a secret for example, don't sync up with the Keystone model right now. Do you have any plan to sync up? Yeah, so part of that is just a side effect of Rackspace having been a little slow to adopt V3. So we'll definitely modify it. Basically, Keystone decides what a tenant is, or what a project is in this case. And so we tie secrets to projects, right? And then Keystone can just tell us whether these are part of domains, or which accounts have access to that particular piece. We don't care about that; that's Keystone's problem. And then we'll just provide access to those things. No, but at some point in time, it's a real problem.
So, for example, you want to have secrets on a per-project basis, right? We cannot achieve that right now. Right now the scoping of resources is only per tenant, right? Right, so that's what we've got right now. I'd like to get more granular than that; that's kind of the policy framework, starting to subdivide those a little bit. So you can say, well, this set of machines, this grouping of machines from Nova, can have access to it, but not this other one, even though they're in the same project. So I think we could certainly get there. We've talked about that in relation to the agent, but the policy framework can be extended to do that. So what we have is basically an MVP, right? It's the first thing that we launched that has value, and we'll get more specific as people ask for those particular types of things. Another question I have: for the use cases you mentioned, like cross-cloud client and Barbican communication, do you guys have any thoughts on the confidentiality and integrity of the message on the wire? I don't think I caught all of that, because you put the mic in front of your face. My question is: since, in the use cases you mentioned, Barbican will work cross-cloud, right, is there any thought on the confidentiality and integrity of the message going over the wire? Yeah, so right now we're basically relying on TLS. Obviously you can layer more things on top of that, right? We can encrypt inside of it, we can do key exchange. In the federated model, when we're talking about federating between two Barbicans, we'll probably do key exchange in addition to TLS. And of course a lot of customers are not gonna even expose Barbican to the public internet, right? They still wanna have some kind of VPN connection between the two to further authenticate it. So we're gonna see how that goes.
But yeah, we can obviously layer multiple different pieces on top of that to provide confidentiality and integrity. Yeah, because your transport-level security is not going to handle those parts, right? Yeah, I mean, TLS gives you some nice features, but it's not the be-all end-all, right? So we'd have to do more. And is there any blueprint in that area so far? No, we haven't written anything up yet, but it's certainly something that we've talked about, so I'm happy to talk to more people about it. It's been an interesting conversation. So, on caching keys when we talk to Swift or something like that: I think the answer is yes, we will have to do that, just for performance reasons. Obviously, the longer you cache the key, the longer it's sitting on your provider's infrastructure. Even if it's encrypted, there's still a danger that it could be exposed. And so I don't know if we wanna try to provide some kind of policy around that and say, okay, well, this key you're allowed to cache for X amount of time, but this one you're not. Or you just wanna let the service decide, or the service provider can just tell you in general: as Rackspace, we will only store your key for less than five minutes. So I don't know. I mean, we're gonna have to see. Part of it is getting encryption done in Swift, getting it deployed at Rackspace, seeing how people are using it, and seeing if caching matters. It may be that for encrypted use cases the caching doesn't help too much. My intuition, and I'm guessing it's yours, is that we're gonna need it. So we've looked a little bit at how to do that; it's just trading off security for performance at that point, yeah. Why do you think the federated model is really better or more secure than the one where you store the key in the cloud? Because if somebody breaches your server, or if the FBI comes to the data center, they can impersonate the Python client, right?
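The per-key caching policy he floats, this key you may cache for X amount of time, this one not at all, could be sketched as a small TTL cache. This is purely hypothetical; nothing like this is confirmed to exist in Barbican, and the names are mine:

```python
import time

class KeyCache:
    """Hypothetical per-key TTL cache on the consumer side (e.g. a Swift
    proxy): each secret carries its own maximum cache lifetime, and a TTL
    of zero means 'never cache, always re-fetch from Barbican'."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock       # injectable for testing
        self._store = {}          # key_id -> (key_bytes, expires_at)

    def put(self, key_id: str, key_bytes: bytes, ttl_seconds: float):
        if ttl_seconds <= 0:
            return  # policy says this key must not be cached at all
        self._store[key_id] = (key_bytes, self._clock() + ttl_seconds)

    def get(self, key_id: str):
        entry = self._store.get(key_id)
        if entry is None:
            return None
        key_bytes, expires = entry
        if self._clock() >= expires:
            del self._store[key_id]  # expired: drop and force a re-fetch
            return None
        return key_bytes
```

A get that returns None would trigger a fresh fetch from Barbican, which is exactly the security-versus-performance trade-off the answer describes: shorter TTLs mean less key material resident on provider infrastructure, at the cost of more round trips.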
So at that point, they can ask me for a key, and I'm going to give it to them, because I don't know that it's the FBI and not Rackspace. So really it doesn't matter where the key is, as long as I can access it, right? Yeah, certainly. So I think it's more secure in that, as a customer, you have control of the federated access. One of the things customers have asked us a lot, and Rackspace put a lot of effort into, is: okay, what happens when I delete a server? Where does my database go? Where does my data that was sitting on that server go? And so we had to write up all this documentation that says, well, Rackspace takes that image out and we wipe it a couple of times before we put it back into rotation, right? Whereas if you're a customer and you've got an encrypted volume in Nova, and you delete that server and you delete that key, you don't care what happens to that data, right? It's gone. So you're absolutely right that if everything is still hooked up and everything's live, and somebody manages to penetrate the infrastructure in a couple of places and make that request, that could be a problem. The other thing that we're doing as part of transparent disk encryption is building a verify resource. When you pop a server and you wanna do transparent disk encryption in Nova, as that server comes up, it needs to make a request for the key, right? Well, we don't just wanna return it, because, as you said, it's just a server coming up. Maybe that server's not even sitting on the Rackspace cloud, right? That would be very bad. And so we're gonna have that server come up and talk to Barbican, and Barbican is gonna use its privileged access to talk to the management APIs of all of the other OpenStack services and try to verify as much as possible that, yes, this was actually a requested boot, this was requested with credentials in Keystone that haven't been revoked, all those types of things.
So you're absolutely right: we can try to layer as much as we can around that, but it's still all linked together, right? Unless the customer has something on their side that blocks it, we'll serve the key. So the question is, how much can we put in place to make that as hard as possible? We'll never be able to make it impossible. That makes sense. And then I have one suggestion on that HMAC verification: you might look into some encoding so that you chunk it and then sign every 64K or something. Yeah, doing kind of a Merkle-tree model or something along those lines. We definitely talked about that and played around with it. This POC was, okay, let's see what we can get working on Swift in two weeks. So I think there are definitely a lot of smarter things we can do, and that's what we're talking with IBM about. The question is, do we wanna support the ability to touch an encrypted file with an unencrypted endpoint? If you don't, then you can mess with it and do whatever you want. If you do, then you gotta be really careful not to break the semantics, and you're giving up security and performance to get there. Yeah, yes? Looking forward, you talked a lot about file encryption and transparency and so on. Do you have any plan to support, like, X.509 certificates, client certificate authentication? I mean, more on the authentication side of it, more than file encryption. More like PKI, do you have any plan on this? Yes, so we'll definitely support provisioning SSL and some of those types of things, kind of baby steps in that direction. As to authentication and identity, we're having a lot more conversations with Dolph and the Keystone guys about who owns what, right?
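The chunked-signing suggestion from the audience, sign every 64K so a client can reject a tampered stream early instead of waiting for the last byte, can be sketched like this. It is my own illustration of the idea, not anything that was implemented; chaining each tag into the next chunk's MAC also prevents chunks from being silently reordered:

```python
import hashlib
import hmac

CHUNK = 64 * 1024  # sign every 64K, per the audience suggestion

def chunked_macs(key: bytes, data: bytes) -> list:
    """One HMAC-SHA256 tag per 64K chunk. Each MAC covers the previous
    tag plus the current chunk, so tags form a chain: a client verifying
    as it downloads can stop at the first chunk that fails, and chunks
    cannot be reordered without breaking every later tag."""
    tags, prev = [], b""
    for i in range(0, len(data), CHUNK):
        mac = hmac.new(key, prev + data[i:i + CHUNK], hashlib.sha256)
        prev = mac.digest()
        tags.append(prev)
    return tags
```

A full Merkle-tree layout would additionally allow verifying arbitrary ranges without reading from the start, which matters for Swift range requests; the linear chain above only supports front-to-back verification.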
And I think, you know, on identity, the idea has always been that Keystone should own anything authentication related, and I totally agree, right? And our kind of thing is we would like to own anything that's encryption key and key management related, including key generation, but I don't wanna actually own the linking of a user to a token and the verification piece; that really feels like Keystone. But it's definitely a conversation we're having at the summit this time around: how we wanna do that, what use cases Keystone needs from us, what keys they wanna generate, X.509s, those types of things, how we wanna chain those off of a root cert. So, like, I don't think that Keystone should have a root cert that it's issuing stuff off of; I think it should use us to do that. That's my opinion. We'll talk with them about that, but it's definitely something we're talking about, how we wanna make that work. We're happy to do it. I think we wanna take that on. We just need to make sure that everybody's comfortable and that everybody's happy with the different pieces of code that they're gonna have to live with. Sure, thanks. We out of time? All right, looks like I'm being waved off. Thank you very much. We'll be around.