Hey, look at that, it works. You guys can come on in if you want, there are still chairs over here. All right, we're on a relatively tight timeline here, so I guess we'll go ahead and get started. Let's see if my clicker works. So today we're going to talk a little bit about Secret as a Service, which is the piece that's grown out of a general key management effort for the open cloud that we've been working on at Rackspace. We call it Secret as a Service because it expands a bit beyond what you would typically consider part of a normal key management system, and we'll talk about that. But just to get started, my name's Jarret Raim. I'm the Cloud Security Product Manager at Rackspace, so my job is to build products that customers use to meet various security needs as part of their configurations. And I'm Matt Tesauro. I was an OWASP board member, I'm still very active in OWASP, and I've got a couple of OWASP projects I keep up with. I've been a Racker since October of 2011, and I work in the product security group, so I'm sort of here to bust up the cloud that we have implemented, hopefully. My background is in development, but I have a strong pen testing background as well. Yeah, so a lot of the genesis of this idea was Matt and I doing some research for OpenStack, figuring out how we make encryption and key management better for the average developer. How do we make it so that there's actually a good answer for how this should be done? That combined with some work we were doing inside Rackspace to redo an SSL certificate service that we offered, and it all came together to become the Secret as a Service offering that we're talking about today. So one of the first things I wanted to get started with is just this question of how important this actually is, right? Do people actually care?
So we reached out to our customer base at Rackspace and got 100 responses from customers that were all over the map: public cloud, private cloud, hybrid customers, dedicated customers, big customers, small customers. We were trying to get a pretty good cross-section of the people using our services, and we asked them a couple of questions. At Rackspace, when we talk about security, and this is true in general, it's always very difficult because there are just so many different tools and ideas and groups, and everybody means something different when they talk about security. So internally we came up with what we call the security taxonomy. It's basically six buckets that let us talk about security in a general sense and get a little more specific with customers about what they want to secure. The buckets are relatively straightforward: data protection; endpoint and network protection, which is firewalls and antivirus and those types of things; identity and access control, and our product manager for identity is right over there; application security; vulnerability and incident management; and configuration and patch management. When we sent the survey out, we gave a two or three sentence description of each one and asked customers to rate how important they believe it is for a cloud provider, for a hoster, to provide these for them. And you can see that 60% of the customers said that data protection was their number one choice. There are lots of reasons why that's the case, but this is clearly something most customers care about, technical or not. We just asked our customers, so some of these are going to be developers and some are going to be CIO types.
Some of these might even be the billing contacts that happened to be the ones that got the survey. So it's really all over the place, but data protection is absolutely the number one thing they ask us about. In the same survey, we asked about risk. A lot of times when you talk to security people, they like to talk about risk, and in this case we see that most of our customers don't believe they're risk takers, which is fair: most of our customers are businesses, and they want to stay in business. But if you look at their cloud strategy for sensitive data, the vast majority of them don't trust us to store that data. And this is really about cloud providers in general, not just Rackspace. Two thirds of the people are either just considering allowing sensitive data in the cloud, or won't allow it at all. So the number one thing that people care about is the thing they don't believe we're doing well. From a key management standpoint, from a security standpoint, this matters a lot to me. This is a good problem to solve. Unfortunately it's relatively difficult to solve, but we're moving in that direction. We've already seen some blueprints from Cinder and Swift, so Cloud Block Storage and Cloud Files, for including transparent encryption as part of their offerings. This is really great stuff, right? The OpenStack products are already starting this work. And you can see, I don't know if my laser's gonna work here, you can see down here on the right: key management appliance. They actually went so far as to say KMIP, which is cool. But in a lot of the blueprints we've seen, each system is solving key management independently. And unfortunately, key management is one of those things that is really quite complicated and has a lot of compliance and regulatory requirements wrapped around it.
So if we solve this independently for every single OpenStack product, we're going to buy ourselves a lot of pain. As Matt and I were looking through this and starting to plan the system, we were thinking about what the different OpenStack products want out of a key management service. We want to support multiple protocols. In this particular case we're talking about KMIP, but there are going to be lots of people who don't want to deal with KMIP; it's quite a complicated protocol. We want to support integration with Keystone, since encryption pieces should link with your identity somehow, so there's obviously a strong role for Keystone. We need to support multi-tenancy, which is one of the bigger problems in the current key management space: existing products are really all designed around the assumption that this is a device that sits behind your firewall, that you own. There might be some divisions where you can say this department versus that department, but true multi-tenancy doesn't really exist. There are vendors who are solving this problem, but it's still something we want to hit. And then one of our major goals is to make sure we tick the audit and compliance boxes. When I sit with bigger customers, these are deal breakers. If you cannot meet their audit and compliance requirements, you don't get to talk about anything else; that is the end of the ballgame. So we have to solve this, and we have to do it well. And if we do it once and everybody can just use it, then it's much, much easier for the community to make use of it and solve those problems without every single product having to worry about all of this garbage. You really don't want to have to read all these compliance docs. They're not a lot of fun.
And then of course, at Rackspace, we believe strongly in the community and in being free and open source. Like I said, there are other vendors solving these problems, but none of them are out there where anybody can be part of the community. They all tend to be quite expensive and they all tend to require some kind of lock-in. So we really wanted to create something that was out there for anybody to use. If we look at future plans, we already know that Cinder and Swift have started down this road, but pretty much every OpenStack product has some need for encryption. Whether it's SSH keys, SSL certificates, or AES keys for encryption of data at rest, everybody has some need for this. Database as a Service, Quantum for some of the networking pieces, Nova obviously for SSH keys and encrypted file systems: everybody has a need for this. We may not all have gotten there yet, it may not be at the top of our backlogs yet, but we all care about being able to provide these types of services. And lastly, while we were building a product that worked well for OpenStack, we wanted that product to work well for customers too. This is where the research really started for Matt and I. A lot of times, as security people, we go, hey, you've got your encryption key just sitting on the disk right here. That's not good, you shouldn't do that. And the developer goes, great, what should I do instead? And then we go, look over there. You shouldn't do that. Don't do that. Ask no follow-up questions, right? Best practice says don't do that. Yes, yes, the document says don't do that. I don't see what the problem is. This is obviously not good. We needed a stronger story for developers on how to do this the right way, one that was easy and supported a lot of their needs.
So that brought a couple of extra requirements. One is multi-cloud interoperability. Customers very routinely want to be able to store their keys separate from their data, or even store their keys on-premises. The system we've designed allows a customer to run the key management piece on their own premises if they want, and then federate access to it from the Rackspace cloud. Or they can run their key management in HP's cloud and federate it with us, or Piston, or any of the other players out there. That's the power of being the open source, free option: we don't all have to go pay these vendors $50,000 a month to do these types of things. We wanted to support easy integration stories, both for the OpenStack teams and for customers. There are a lot of applications that are just never going to get another dev pass; nobody's going to go back and change a lot of code to make this work, so it needed to work pretty much out of the box. We wanted it to be centrally managed. Key management is one of those things that is very complicated because it has audit and compliance requirements, so central management lets a company own their own destiny, and it also gives us a service opportunity: for cloud operators and cloud providers, it allows us to help our customers do these things correctly even when they may not have the expertise to do it themselves. And finally, like I said, a lot of times when you deal with things that are compliance related, you spend all your time checking the compliance boxes and forget to actually make anything more secure than it used to be. So we really wanted to design a system with sane defaults that provided a higher level of security and also covered the compliance. That's important, right?
But we wanted to make sure that you really got security out of it, and that we weren't just doing this because some law said we had to. So we know what we want. We know that customers want this. We know that it's a problem, and people have already started on it, so why don't we just go do it? Unfortunately, it's not quite that easy. Yeah, well, thank you. So here's an example of some code we pulled out of somewhere. And this shows the problems: we have hard-coded algorithms, we have a static salt, and we have a key that has essentially no entropy at all. That's a nice way to put it. I'm trying to be nice here. This is the world our developers are living in, right? It's hard to get this right. So we wanted to make sure that when we did this, we did it right, and we provided it as a service to make it easier. When we looked at this, Matt and I picked out six problems. Yeah. Anybody see any more than that? Can you find them all? So: MD5 and Triple DES. Oh yeah, that was the other one, I didn't mention that one. Those are not algorithms you're supposed to use anymore. They're also hard-coded, which is also not something you're supposed to do. The salt's hard-coded, the key's hard-coded. You've got encryption and decryption methods that deal with bytes but don't actually mention encodings, also a problem. And I'm forgetting the sixth, but there were six. At least six that we found; there were probably more. I actually looked at the implementation of encrypt and decrypt. Oh, and there's no IV, although maybe you don't need one for the way they were doing it. Anyway, there we go. And then, for example, if you're a Rubyist doing Rails: Rails has this secret token that it uses for CSRF tokens and your session values.
And by default, if you're using GitHub, that guy just gets checked in, right? And then GitHub adds search, and we can find all your keys and hijack all your sessions and all that fun stuff. GitHub search has become the greatest hacker resource on earth. Yeah, it's amazing. People check in their home directories and they've got their private SSL keys in there and all kinds of fun stuff. Oops. And even if you do want to do it right and you do some searching, and we did this, the query here was something like "AES encryption Ruby", but you can do this for whatever language you pick, look at the top hits. 2007? That's ancient, crypto-wise. 2011, we're getting a little bit better. But if you look at these examples, they have hard-coded keys, they have hard-coded algorithms, they have bad cipher choices. They don't make a distinction between, or even mention, ECB versus CBC. A lot of them are quite old, and they have awful habits like null or hard-coded IVs. So even if you have a developer who's trying to do the right thing, if they follow the guidance they get from the web, it's probably wrong. Yes, literally every link on the first page of Google had a major security vulnerability. We walked through them one day. It was just sad. If you follow any of that code, you're screwed. Yeah, you're certainly not getting good advice. So to solve this for developers, and for OpenStack and OpenStack customers, we came up with a project that we call CloudKeep. It's out on GitHub right now. We're a couple of sprints in, so it's still pretty green, but we're trying to get a lot of documentation out there. And I would invite all of you, if you're interested, to please come jump on the mailing list, put issues into GitHub; pull requests accepted, free and open source. It is castle-themed because I'm just that level of nerd.
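To make the contrast with those search results concrete, here is a hedged sketch of one of the fixes: deriving a key from a password with a random, per-secret salt and a modern digest, using only the Python standard library. This is illustrative, not CloudKeep code.

```python
import hashlib
import os

def derive_key(password, salt=None):
    """Derive a 256-bit key from a password.

    Contrast with the slide's code: the salt is random and per-secret
    (stored alongside the ciphertext, never hard-coded), the digest is
    SHA-256 rather than MD5, and the iteration count makes brute force
    expensive.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each new secret
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
    return key, salt
```

The same password and stored salt reproduce the key for decryption, while a fresh salt yields an unrelated key, which is exactly the property the hard-coded-salt examples above give away.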
So there are four pieces that make up the CloudKeep product. Barbican is the major piece. That's the REST API that provides all of the major services in CloudKeep: provisioning of secrets, secret storage, lifecycle management, auditing and reporting, all the compliance stuff. We spent a long time making sure we used technologies that were friendly to OpenStack. We're using Falcon, which is a small, fast WSGI framework that came out of Rackspace and is open source now, I think. Yeah, that one's been around. We use Oslo and all the other common libraries. We spent a lot of time trying to match the standard OpenStack way of doing things, in hopes that if people find this valuable, well, I don't know if we're supposed to become an incubated project or not, but certainly we want the project to be useful to OpenStack developers. Also, with the goal of making it easier for developers to use these keying services correctly, we created an agent called Postern. Postern is an agent that runs on the box and provides access to the keys through a FUSE file system. So from the application's point of view, it's reading the key from a file on disk. Now, there's no file and there's no disk, right? But from the application's point of view, that's how it looks, and we'll show a little bit of that; that's what we're going to demo a little later. We also have Palisade, which is a web UI. One of the goals for this product is to be able to run it independent of OpenStack, independent of Rackspace. If you want to run it entirely on your own network and then just federate it to us, that's a use case we want to support. So we wanted to have a web UI that was separate from Horizon and some of the other things, so that customers could run it themselves.
And then finally, we have a command line client, Keep, very similar to python-novaclient or something like that. So if you need to configure things from the command line on the server, you can use that. We've gone through a couple of the ways we got to this point, and we tried to boil all that down to a few design principles when we started out. First, we wanted a central key store capable of distributing key material to ephemeral cloud instances. One of the major problems you see in the encryption world is that it's really designed for the older hardware style of things. Those products don't expect servers to come up and down all the time, they don't expect things in the public cloud, and they expect certain network topologies that aren't necessarily what you'd see in a public cloud deployment. So we wanted something that hit the cloud use cases first and made sure those were solid, while still being useful for the older environments. There's no reason the system won't work in a purely dedicated, on-premises environment; you can use it for that if you'd like. Second, we wanted to support reasonable compliance regimes, with the reporting and auditability that implies. I put "reasonable" in there so I'm not tied to all of them, because I'm sure there's one that's going to make me want to cry. Well, more than I already do. Alrighty. Number three: application adoption costs should be minimal or non-existent. This is really why we did the agent. The agent is entirely optional; you don't have to use it. It's just a way that, if you have an application that is already reading a key from disk, you can do all of this without having to make any changes to your code. Yeah, in an ideal greenfield world, you'd just write against the API, and you can if you want to. But if you have legacy things to accommodate, that's where the agent comes into play.
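To make that "no code changes" point concrete, here is a hedged sketch of the legacy side. The path is made up; the point is that the application below has no idea CloudKeep exists, because in a Postern deployment the path would sit under the agent's FUSE mount and the read would be answered by the agent after a policy check.

```python
# Hypothetical legacy application code, unchanged by CloudKeep adoption.
# The path is illustrative; under Postern it would live on the agent's
# FUSE mount, so this read() is actually served from the agent's memory
# rather than by a real file on disk.
KEY_PATH = "/etc/keys/db.key"

def load_key(path=KEY_PATH):
    """Read key material exactly the way a legacy app already does."""
    with open(path, "rb") as f:
        return f.read()
```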
Fourth, we wanted to build a community and ecosystem by being open source and extensible. Like I said, there are some teams out there solving this type of problem. Gazzang has done a really good job building out this kind of key management idea, among other things. But we really believe this should be an open source product, a common API that we can all write to and everybody can use. I would love it if our API got implemented by a lot of these vendors, because they do a lot of stuff that we're not going to do. Gazzang maintains eCryptfs, one of the major encrypted file systems for Linux. That is good stuff and they should keep doing it, so they have a good differentiator already. I don't really see us competing with these guys so much as helping to standardize, while also providing an offering for people who don't have the money for some of the more expensive vendors. Fifth, we want to improve security through sane defaults. An out-of-band communication mechanism was important for us because it allows the agent to respond to actions on the server that are hinky, for lack of a better word. Something is amiss. Something is amiss, yes. And we'll show that in the demo. And lastly, we wanted to use OpenStack tools, processes, and libraries to make sure it fit into the ecosystem. Whether we become an official part of OpenStack or not, I don't know. I think I would like that, but we'll have to see. Even if we're not, though, we want it to work well with the OpenStack community. Part of making it as flexible as we've talked about is making sure most of our functionality is implemented through plugins. We support a bunch of different extension points right now, and I'm sure there will be more. We support internal and external certificate authorities. Rackspace uses the Symantec set of CAs.
So that's Thawte, VeriSign, GeoTrust, and RapidSSL. And we'll also support internal CAs, so you'd be able to provision SSL certificates both off a CA that will publicly validate and off an internal CA if you want to run one of those. These all include lifecycle management: notifications for when a cert needs to be renewed, automatic renewals, those types of things. For datastore backends we just use SQLAlchemy, so you can put it on pretty much whatever database SQLAlchemy supports. We support automatic provisioning. One of the goals of the system was to make it easier for customers to do these things the right way, so if you provision an SSL certificate, you can tell us, hey, I want you to install that on the load balancer I already have, and we'll get the cert for you and then reach out and do that installation. We'd really like to beef up that area going forward. And finally, one of the sticky wickets in dealing with these types of encryption keys is whether or not you use HSMs, hardware security modules. These are specialty pieces of hardware designed to host keys, and not really designed to work well in the cloud. That's changing, but most of this stuff is still pretty ancient, and there aren't a lot of vendors in the HSM space either, which makes things a little challenging. So we do support HSMs, and you'll be able to use them if you want to, but they will not be required, because HSMs tend to be quite expensive and requiring them would price a lot of customers who might want to run this out of the market. Yep. And the agent, which we talked a little bit about, gives us legacy application integration, which was a big piece for us. We also allow customers to specify policies that are tied to secrets. These policies allow a customer to specify how many times a key can be used and under what circumstances, and we'll walk through how this works.
And the agent enforces a lot of those policies. Right, so in a lot of cases it's a way to provide the key material to your application when it needs it, but never when it doesn't; it limits the attack surface for your secrets. It's Keystone integrated, and it supports the out-of-band communication we talked about. So, just to give you an example of what a policy looks like. This is still pretty green, but you can see it's got the basic stuff, name and ID, and one of the major fields is this max key accesses piece you see over here. This allows you to specify how many times a key can be accessed per operation that you care about: that might be per reboot, that might be per process restart. Then there's minutes available after reboot. So for example, say your application, when the machine comes up, reads the key, decrypts its configuration file, and then does its stuff. You know that's going to happen in the first five or ten minutes, probably even less, after your machine boots, and that application should never need that key again. So why keep the key there if it isn't needed? With this setting you can say, I know this key is not going to be needed after a certain point, so I'm going to take it away. The events section just handles how the agent talks back to the server. A lot of times the agent will send messages back to the server over this out-of-band mechanism. You can still log to syslog and all that for your regular logging systems, but this allows us to meet some of the compliance requirements we've talked about. The server, the Barbican API, actually knows every key that's out there, where it's exposed, when it's being used, what applications are using it, how they're using it, all those types of things.
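Pulling the fields just described together, a policy document might look roughly like this. The field names approximate the ones on the slide (max key accesses, minutes available after reboot); the real schema was still green at the time, so treat this as a sketch, not Barbican's actual format.

```python
# Illustrative CloudKeep-style policy; field names are approximations.
example_policy = {
    "name": "webapp-config-key",
    "id": "policy-0001",                     # hypothetical identifier
    "max_key_accesses": 1,                   # key may be read once...
    "access_scope": "per_process_restart",   # ...per process restart (or per reboot)
    "minutes_available_after_reboot": 10,    # after this window, the key is withheld
    "cacheable": False,                      # agent must fetch from the API each read
}

def key_available(minutes_since_boot, accesses_so_far, policy):
    """Would the agent still serve the key under this policy?"""
    return (minutes_since_boot <= policy["minutes_available_after_reboot"]
            and accesses_so_far < policy["max_key_accesses"])
```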
Right, and a lot of times that's way more information than you actually need to support compliance requirements today, but if you have it, you're much better off when your auditor comes knocking. There's another section here, executables, which allows you to specify who's allowed to access the key: which process on that box, right? You can also specify minutes available after restart, which works really well if you wanted to use your private key in Apache. You know Apache is going to restart, so you provide that key to Apache for some window after the process restarts. You can specify the name, owner, and path, and you can give it a hash if you want to check that the executable hasn't been tampered with, any of those types of things. It is worth noting that the agent runs on the box. If somebody has root on the box, then you're kind of out of luck; we didn't really try to solve that particular scenario, since at that point they could just dive into the process's memory and do much more. But it works well outside of that particular kind of attack. And it's certainly an improvement over just dropping the key in /etc, readable by root. Right. The last piece, which is important for the demo we'll show here, is the file system. I mentioned that the agent uses FUSE, which is a user-space file system daemon. Basically, when the agent comes up it mounts a directory, and that looks like a file system to your application, but it's actually just an application. Every time your application makes a request to read from that file system, the request goes to the kernel, the kernel turns around and sends it to us, and the agent handles the request. That's how we're able to say, okay, I'll check whether this particular request meets the policy, and if it does, then I supply the key material. You can specify a directory name, owner, and group. Listable basically just means whether ls will work.
If you turn it off, then if someone tries to ls it, the agent will panic and disconnect. Last little bit here before the demo: an example secret. This is pretty straightforward stuff. We allow customers to specify an expiration date for all secrets, whether or not the secret type natively supports one. SSL certs already have a date where they'll automatically die, but AES keys, for example, don't. So if a customer wants to support rotation, and you see this a lot in compliance requirements, that you need to rotate your key every X amount of time, you can specify in the API that the key itself will expire, and when it does, the API will no longer provide it to any requesters. We don't delete it, of course, unless you ask us to, but we can support that. Then we have the secret itself, which as shown here is just a Base64-encoded set of bytes; we're still playing around with the best way to store all of these, since there are a lot of different encryption formats. And then we store the type of secret as a kind of bastardized MIME type. MIME-like type. Once we've nailed these down a little, we'll go to IANA and hopefully try to get them blessed. There are a few MIME types for PKCS things like certs, but there isn't one for general encryption keys. We'd like to get those added. The nice thing about this is that it allows us to store basically whatever we want, and this is why we call it Secret as a Service. Customers can upload anything they want to us and we'll treat it like an encryption key. You want to put your entire configuration file up there? Knock yourself out. You want to put a username and password in there? We don't care, right? Whatever a customer wants to put in there is fine, but we do want to tie in additional functionality when we know what type of key material it is.
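As a sketch, a secret record like the one just described, with its Base64-encoded payload, MIME-like type, and optional expiration, might be modeled as follows. The type string is hypothetical in the spirit of the talk's "bastardized MIME types", not an IANA-registered one.

```python
import base64

def make_secret(name, raw_bytes, mime_type, expiration=None):
    """Build a secret record: payload stored as Base64, plus a MIME-like
    type and an optional expiration (useful for rotating key types, like
    AES keys, that have no built-in notion of expiry)."""
    return {
        "name": name,
        "secret": base64.b64encode(raw_bytes).decode("ascii"),
        "mime_type": mime_type,    # e.g. a MIME-like "application/aes-256-cbc"
        "expiration": expiration,  # datetime or None
    }

def is_served(secret, now):
    """The API stops handing out an expired secret; the stored material
    itself is kept unless the owner deletes it."""
    exp = secret["expiration"]
    return exp is None or now < exp
```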
This is very important for SSL certs, because we need to know when they expire so we can do automatic renewals and all these other types of things, and that's what the MIME type gives us. If a customer just uploads something, we don't know what it is, so we won't have any functionality attached to it; but if we do know, then we can provide additional functionality. And lastly, the file system section here. We saw in the policy that you can specify a directory; this one allows you to specify the actual name of the file. This presentation bit is a little interesting. Right now we haven't set it, which just means we're going to dump the contents of your secret to a file and you can just read it in. But one of the things we wanted to support is the ability to have the same key appear differently depending on who's consuming it. Say you have two applications, one Java and one Ruby, and they're both accessing the same database, not totally uncommon, passing data between each other that you want to encrypt with the same key. Well, the Java application may expect the key to be stored in a Java keystore with a particular password, whereas the Ruby application may just expect it in a flat file. So the actual key is stored separately from the presentation that the agent provides, and you can say, okay, for one I want it as a file, but for the other I want it as a Java keystore, and the agent will present them differently. On the box, when you look at them, they will look different even though it's the same key material. And then lastly you can set the mode, owner, and group. So I think, unless there are questions, I mean, we're going to do a demo here and then take some questions, but if there are questions now I can certainly take them. Yeah.
So the question is, on the file system, is the key actually being sent down to the agent? And in this case, yes, it is. At the end of the day, the agent will request the key material from the server and the server will provide it, because at the end of the day the application needs the key material. Once we pass that key material to the app, we have no control over it anymore; the app does what it wants. We do support, oh, the cacheable setting? Yeah, there's a policy setting where you can specify whether or not that key should be cached by the agent. If you specify cacheable as false, then when you make a request to the file system, it will block while the agent goes and gets the key and returns it. If your application needs that key pretty frequently, you may want to cache it locally so you're not making all those requests. But the key thing to remember is that even though you're able to specify things like owner and group, it's not a real file. It looks like a file, it smells like a file, and it acts like a file, but it's not a file. It's the opposite of duck typing. Yeah, when those kernel calls come in, they come to the code for the Postern agent, and I wrote that code, so I can decide what I want to do with it. Oh, you're going to read? Okay, fine, here's your file: I make what looks like a file and return it to you. But it's always in RAM, so even when we panic, I just clear all the memory structures and quit. Right. And so how we protect the file is both the permissions on the file itself and the policy applied to it. You can limit the number of accesses, so if you know your application is only going to read the file once, you can specify that you only want to allow it to be read once.
If anyone ever reads it after that, it violates policy and the agent does what's called a panic. It sends a message back to the API that says, something bad has happened, cut me off. The API at that point will sever the pairing, so the agent no longer has access to key material, and then the agent basically self-destructs: it destroys its memory structures and dies. At that point there's nothing on the server. So if the server was compromised and somebody went into that directory and tried to access that key, well, it's already been accessed by the application, so it panicked. And the only way to connect it back again is for someone to literally log in and hand-pair that agent to the API. All right, any other questions before we do a demo? Yeah. So how do you secure the connection? Good question. The question is, how do we secure the connection between the agent and the API? We've talked about a couple of different things. You could go all the way and do a full key exchange. We'll use HTTPS as the actual transport mechanism. How are we doing the authentication? We'll use Keystone to do the actual bootstrapping. One of the major goals when we were designing the agent was making it very easily deployable through Chef and Puppet. So what we'll probably do, and we haven't totally finished figuring out whether we like this or not, is that during the agent install you'll provide your regular Keystone credentials, your username and API key, and at that point we'll generate a sub-user for that particular agent. Then every time the agent wants to talk to the API, it goes to Keystone first, gets a token, and then talks to us. But because we create a sub-user, that token will be limited to only being able to talk to our API.
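The read-once and panic behavior described above can be sketched as a small in-memory model. This is illustrative only; the class and attribute names are assumptions, not the real Postern implementation, and a real panic would also message the API to sever the pairing.

```python
# Hypothetical sketch of agent-side policy enforcement: a virtual
# "file" that lives only in RAM, counts reads against max_key_accesses,
# and self-destructs (wipes its memory) on a policy violation.

class PanicError(Exception):
    """Raised when a read violates policy and the agent panics."""

class VirtualSecret:
    def __init__(self, material: bytes, max_accesses: int, cacheable: bool):
        self._material = material      # key material, RAM only
        self.max_accesses = max_accesses
        self.cacheable = cacheable
        self.accesses = 0
        self.panicked = False

    def read(self) -> bytes:
        if self.panicked:
            raise PanicError("agent already panicked")
        if self.accesses >= self.max_accesses:
            self.panic()
            raise PanicError("policy violated: too many reads")
        self.accesses += 1
        return self._material

    def panic(self):
        # Real agent: report the violation to the API (which severs the
        # pairing), then destroy memory structures. Nothing is on disk,
        # so wiping memory removes the key entirely.
        self.panicked = True
        self._material = b""

s = VirtualSecret(b"key material", max_accesses=1, cacheable=False)
first = s.read()     # the one allowed read succeeds
# a second s.read() would panic and destroy the material
```

The detail that matters is that destruction is cheap: because the "file" was never on disk, clearing the in-memory structures is a complete wipe.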
So even if you compromised the agent and somehow managed to authenticate as it, the only thing you get is the ability to talk to the API, which, if the agent has already panicked, doesn't get you much. And we don't protect against memory attacks; if somebody's got root on the box and can read memory, it's game over anyway. Yeah, you're kind of screwed at that point. You've got bigger problems. Cool, any other questions before we go? Windows workloads? Huh, what was that? Windows workloads. For Windows? Yeah. So we have not done anything with Windows yet, but we will. We were originally looking at building the agent in Go, because Go is cool. I like it; it's awesome. But it turns out there's a team at Rackspace that has built and open-sourced an agent framework. We use it for the Rackspace Cloud Monitoring agent, and it already runs on Linux and Windows and various other platforms. It's free, anybody can use it; go check it out, it's called Virgo, it's on GitHub. We may build on top of that. We're working with them to solve some security issues, since we want it to behave a little differently, but we might go that direction, and that would get us Windows support pretty quickly. Unfortunately, Windows doesn't have great support for FUSE-style file systems, although it is doable, so we still need to do some work to figure out how to make that work. So I will take the open-source way out and say: pull requests accepted. Okay. Windows does have DPAPI. Yep. In the back. So when you said you're creating sub-users, you're creating sub-users at install time? That's the thought right now. We're still actively arguing about this. Originally we had created a pairing scenario where the Keystone integration only happened at install, and then we created a bearer token.
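The bootstrapping flow the speakers describe might look roughly like this sketch, with stub objects standing in for Keystone and the key-management API. Every name here is an assumption for illustration, not the real Keystone or Barbican client API.

```python
# Hypothetical sketch: install-time sub-user creation, then a
# token-per-request pattern, with stubs in place of real services.

class StubKeystone:
    def create_user(self, parent, api_key, name, allowed_services):
        # Real Keystone would persist this; the sketch just echoes it.
        return {"name": name, "services": allowed_services}

    def get_token(self, sub_user):
        # A real token would be scoped so it can only reach the
        # services listed for this sub-user.
        return f"token-for-{sub_user['name']}"

class StubAPI:
    def get(self, path, headers):
        return {"path": path, "auth": headers["X-Auth-Token"]}

def create_sub_user(keystone, username, api_key, agent_id):
    """At install: use the operator's credentials once to mint a
    sub-user limited to the key-management API."""
    return keystone.create_user(
        parent=username, api_key=api_key,
        name=f"agent-{agent_id}",
        allowed_services=["key-manager"])

def agent_request(keystone, sub_user, api, path):
    """Every agent call: Keystone first for a token, then the API."""
    token = keystone.get_token(sub_user)
    return api.get(path, headers={"X-Auth-Token": token})

ks, api = StubKeystone(), StubAPI()
sub = create_sub_user(ks, "demo-user", "abc123", "01")
resp = agent_request(ks, sub, api, "/v1/secrets/secret-key")
```

The design point is the scoping: because the sub-user can only reach the key-management API, a stolen agent credential buys an attacker very little.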
But then the problem was that we were basically building authentication into the Barbican service, and I was like, wow, this seems kind of stupid, we're duplicating work. So we'd like to actually use Keystone for that, and we'd also like to use Keystone to deliver the policies, since Keystone already has policy support. So I don't know, I'm not sure about that yet. Joe and I have been talking about it; we need to get into it a little more. So does that assume you're able to write? That we're able to create a user at install, yes, that is true. Like I said, we're still trying to figure that out, but yes. We've talked about a couple of different ways; we've been back and forth, because we want to not only integrate it with OpenStack but have it be generally usable. So whatever we end up with is going to be a pluggable architecture: if you want to do Keystone, you can do it this way; if you're running it in your shop with just legacy apps and the agent, you can do it that way. It's more to make sure that architecturally we don't cut off any potential avenues, as opposed to picking one. But you're right, read-only LDAP is a good point. We'll have to think about that one. Yeah, that is a very good point. Cool. Any other... yeah, you definitely could. So at the end of the day, it's a little bit of just moving the cheese. If they have root on the server, it doesn't matter if I encrypt locally; they can still get that key. So we can make it as hard as we can. We've talked about doing a key exchange on boot when the agent first pairs, storing that on the server, and signing or encrypting the messages that go back and forth. So yeah, a couple of different options. We're just going to have to play around with it. I don't want to make it so complicated that it's impossible to install and configure.
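One way to picture the "pluggable architecture" mentioned above is an auth interface with interchangeable backends: Keystone for OpenStack shops, something simpler for legacy installs. This is a sketch under that assumption; none of these class names exist in the real codebase.

```python
# Hypothetical pluggable-auth sketch: one interface, swappable backends.

from abc import ABC, abstractmethod

class AuthBackend(ABC):
    @abstractmethod
    def authenticate(self, agent_id: str, credential: str) -> bool: ...

class KeystoneAuth(AuthBackend):
    """OpenStack deployments: validate a Keystone-issued token."""
    def authenticate(self, agent_id, credential):
        # Real version: ask Keystone to validate a token scoped to
        # the key-management API. Prefix check is a stand-in.
        return credential.startswith("keystone-token:")

class SharedSecretAuth(AuthBackend):
    """Legacy shops without Keystone: a pre-shared pairing secret."""
    def __init__(self, secrets):
        self._secrets = secrets
    def authenticate(self, agent_id, credential):
        return self._secrets.get(agent_id) == credential

def get_backend(name: str, **kwargs) -> AuthBackend:
    backends = {"keystone": KeystoneAuth, "shared": SharedSecretAuth}
    return backends[name](**kwargs)
```

Swapping deployments then becomes a configuration choice rather than a code change, which is the architectural avenue the speakers say they want to keep open.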
But it's definitely a good idea: even in memory, we can encrypt what's sitting there, so if you looked at the memory you could probably still go find the key, but it's one level more difficult. Yeah, we've definitely talked a bit about how we can structure this. When we talk to enterprise customers, one of the biggest concerns, especially for customers outside the US, is data custody and where the data geographically resides. At Rackspace this is not a particularly difficult problem, because all of our endpoints are localized, but in a lot of other cases where people might use this, that's not true: you upload to a general API and you don't really know where that data is going. So being able to keep that key under your control matters, because if the data ends up in the US and is subject to a Patriot Act request or a subpoena or something like that, we have to give it up. We're a US company; if they show up with a legal search warrant, we're going to give them the data. There are some laws in the UK, for example, that address exactly that; you're not allowed to do those types of things. So definitely, there are a lot of complicated use cases around who owns the keys. And that's why I really hope we can standardize on an API, or at least come to an agreement, because there are good reasons to pay the commercial encryption providers, right? They're not just charging you for the fun of it, although I'm sure some of them are. There are reasons you might want to pay: geographic distribution, various backup capabilities, all kinds of options. Yeah. So is the data encrypted on your server? The question was, do we encrypt the data? And the answer right now is no. We do key management only.
So if you think about it from the FIPS certification standpoint, we're at Level 2, not Level 3. Because it seems like you're getting these secrets via authentication through a shared secret, that shared secret could just be used for a symmetric cipher. You definitely could, and it would certainly be more secure if customers sent the data to us and we encrypted it for them. The challenge with that is really scaling. If you look at something like encrypted Cloud Files, we'll have customers that spin up tens of thousands of containers, upload stuff to them, and drop them, all in the space of a couple of hours. If they had to send all that data to us, we would have to be bigger than Cloud Files to do that level of encryption. This way, Cloud Files or somebody else can take the key and integrate it into their own infrastructure and their own scaling scenarios. So it is a trade-off between security on one side and performance and usability on the other. We've talked about doing some things there. Once you get into HSM land, it's very secure to send the data to the HSM to be encrypted, because then the key never leaves the physical hardware device, and that's about as good as it gets. But that means you literally have to send all of your data to a single place, a set of boxes on the network, and that really starts to limit the cloud scenarios. So yeah, I agree with you, there are certainly ways we could use the key material; once you get into that particular space it gets a little complicated. At least at Rackspace right now, our feeling was that it would be better for us to do key management and let the other OpenStack projects provide the encryption piece.
But I could certainly see an encryption-as-a-service at some point in the future, where data gets sent to us and we take care of the entire thing. Does that answer your question? Or were you thinking the keys should be encrypted as they're stored by the key management server? That's what I was thinking. Oh, they are. They are, sorry. I realized that when you started going; I answered a question you didn't ask. Yeah, but that other guy had a hand up; maybe it was him. So, one of the challenges with HSMs is they tend to have limits on the number of keys they can store. What we actually do is store the key material in a regular SQL database, but all of the key-encryption keys, the keys we use to encrypt the data in our data store, are stored in the HSM. We're still figuring out the best way to do this, but right now our thought is we'll have a key per tenant. So if you're a particular tenant, we'll encrypt your elements with your own key-encryption key stored in an HSM inside our service, and that key never leaves our service; that's what we use to protect our data. So it's another level on top of whatever the customer does. All right, any other questions? How do you do HA with the HSMs? Well, like I said, the HSMs are optional. But the question was, how do you do HA with HSMs? There are some vendors, SafeNet is a good example, that have good stories around this. You'd have two physical devices that are paired, and you can put those pairs in multiple data centers; they've written their own replication that syncs between them. So you can set an API on top of those individual HSMs and go that way if you want.
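The key-per-tenant layout described above is classic envelope encryption, which can be sketched as follows. This is an illustration only: the class names are invented, and the hash-based XOR keystream is a placeholder cipher for the sketch; a real service would use AES-GCM inside actual HSM hardware.

```python
# Envelope-encryption sketch: wrapped secret material sits in a
# regular database; the per-tenant key-encryption key (KEK) never
# leaves the "HSM" object.

import hashlib
import os

class FakeHSM:
    """Stands in for the hardware device: KEKs never leave it."""
    def __init__(self):
        self._keks = {}   # tenant -> KEK, internal only

    def _keystream(self, tenant, nonce, length):
        kek = self._keks.setdefault(tenant, os.urandom(32))
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(kek + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def wrap(self, tenant, plaintext):
        nonce = os.urandom(16)
        ks = self._keystream(tenant, nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def unwrap(self, tenant, blob):
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(tenant, nonce, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

hsm = FakeHSM()
database = {}   # the "regular SQL database" holding wrapped material
database[("tenant-a", "secret-key")] = hsm.wrap("tenant-a", b"customer key bytes")
```

Because only wrapped blobs ever touch the database, the HSM's limited key capacity is spent on one KEK per tenant rather than one slot per customer secret, which is the scaling point the speakers make.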
We've talked about a couple of different options for how we do it, but once you get down to the HSM piece, it really depends on what the vendor supports. And a lot of this comes down to policy and your risk level: what you're protecting and how much you care to protect it. That's why cacheable versus non-cacheable is in the policy. Maybe you don't care that the cache is in memory; I'm okay if root can get onto the box and read it out of memory, because if they have root, I've got bigger problems. Yeah. Does that answer your question? Okay. All right, so let's go ahead and do the demo before we run out of time, and then if we've got time left over we'll take some more questions, if that's all right. I will mention this is very much POC code. This is what Matt and I wrote real quickly to flesh out some of these ideas. So while it is out on GitHub and you can get it, please don't run it in production; it is not secure in the slightest. It's really just for us to play around with what the interfaces should look like and how the different bits should function. There's a reason my GitHub repo is called postern-poc. Yes, please don't use it. So this is a quick little Flask app that we put together; none of this code is actually going to survive the translation to production code. Our production code base is actually out there now, so you can take a look at it. You can see here we have a user that we've hard-coded, and a particular tenant they're assigned to. We have a key; you can see the file name for this key is secret-key. Then you've got the policy, where we've specified max key accesses of one; this becomes important when we play around with it a little later. And lastly, if you go to events, there's nothing in there right now. Events are the messages that come from the agent back to the API.
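The POC's data model, as walked through above, can be pictured as a few in-memory records plus an event log. This stdlib-only sketch mirrors the demo's fields but the exact names are assumptions, not the actual POC schema.

```python
# Hypothetical sketch of the POC data model: a hard-coded user in a
# tenant, a named secret, a policy with max_key_accesses, and an
# event log fed by agent messages.

store = {
    "users": [{"name": "demo-user", "tenant": "tenant-1"}],
    "secrets": [{"tenant": "tenant-1", "file_name": "secret-key",
                 "mime_type": "application/octet-stream"}],
    "policies": [{"secret": "secret-key", "max_key_accesses": 1,
                  "cacheable": False}],
    "events": [],   # empty until the agent starts reporting
}

def record_event(store, agent, kind, detail):
    """Agent-to-API messages land here: pairing, key access, panic."""
    store["events"].append({"agent": agent, "kind": kind, "detail": detail})

record_event(store, "postern-vm", "paired",
             "agent online, enforcing secret-key")
```

This matches what the demo shows next: an empty events list that fills up as the agent pairs, serves the key, and eventually panics.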
These are stored as audit log records, but they also just let you know what's going on. So that's the first terminal window we had there, which is the Barbican API running. And this is Postern, a separate VM running the Postern agent. We've let it auto-pair just for the sake of the demo, so you don't have to watch us type a bunch of passwords. Basically the agent paired to the API, sent an info log, and it's done; it has now completed and mounted the file system. So if we go back to the web UI and go to agents, it should show up now. Here's our agent. On host postern? Right. And if you go to events, you can see it's put a little event here saying, okay, I've turned myself on, I'm ready to go, I'm providing key material, and it tells you it's enforcing for that secret-key, and the host name. Yep, you still have that bug. Well, you know, I fixed it actually, but it's not in what's on this box; I fixed it yesterday. I'm missing a space, give me a break. Can't take you anywhere. Pull request accepted, yeah. It's a space. All right, so this is the same VM that Postern is running on; the agent is running in one SSH session and this is the other. If we look at /keys, which is the default mount point, though of course you can put it wherever you want, you can see there's a file called secret-key. From the application's standpoint, this is a file on the file system; there's no difference. One of the nice things about this structure is that in dev and test you can just copy a file onto the file system in that location and write your app, and when you go to production it will be exactly the same, except that now it happens to be secure, whereas before it wasn't. So we go ahead and cat the key.
So we're going to get it, because we said you're allowed to access this key one time, and this is our one time; we've now accessed it and we can see it. So if we go back to the API, or to the web UI there... oh yeah, I was remembering from the last demo that it had been running beyond 10 minutes, but this hasn't been running that long. Yep, so go to the web UI and look at the events. You can see now that we've allowed an access of the key. This is where we start to get the nice logging and auditing we need for compliance: this key was provided on this box at this time, and it was accessed by this process, started at this time, along with all the various other things we want to store. And one of the things we'll support is the ability to dump this information as a stream into your SIEM. So if you have an internal security operations center that's already got a SIEM, security information and event management, something like that, then we can dump this as another stream of data to your security team and they can monitor it. So now, back on Postern, we have done what our policy allows us to do. Now we're going to access the key again. You see we get nothing back, and if we actually ls that directory, it's gone; the agent has literally torn it down, and the file system no longer exists. The agent has panicked: somebody tried to access the key outside of policy, so the agent basically raised its hand and said, no, I'm done. It self-destructed, it told the server it's done, and the server has completely cut it off; the pairing no longer exists. So if we go back to the web UI and go to the events, you can see a nice little panic message. This way, if something does happen outside of what you expect, at least you get a notification.
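The "stream into your SIEM" idea could be as simple as emitting each audit event as one JSON line, a format most SIEMs can ingest. The field names below are assumptions for illustration, not a defined Barbican event schema.

```python
# Hypothetical sketch: serialize each audit event as a JSON line
# suitable for shipping to a SIEM as a log stream.

import json
from datetime import datetime, timezone

def siem_line(event: dict) -> str:
    """One audit event -> one timestamped JSON line."""
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "source": "key-manager",
              **event}
    return json.dumps(record, sort_keys=True)

line = siem_line({"kind": "key_access", "secret": "secret-key",
                  "host": "postern-vm", "process": "myapp"})
```

A security operations center could then alert on `kind == "panic"` events the same way it alerts on any other log source.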
We do have some settings, of course, where you can just log the panics and not actually destroy the key if you don't want to; that's nice while you're originally getting things set up. We were also thinking about having an option in the agent where you could run it in learning mode for a month or so, just so it figures out how your application uses the key and then generates the policy for you; a little easier than doing it yourself. Yeah? Since it looks like a file, what's to stop something from copying it to a different file? So if they're able to access the key, they're able to access the key. Once they can copy it, they can put it onto a different drive, they can email it somewhere. At the end of the day, the key needs to be provided to the application, precisely because, as we talked about in the answer to the other question, we don't do the encryption. The protection applies in that you know how your application accesses the key, so you set the policy up so that only that access pattern is allowed. Usually the way these things work is that your application boots, accesses the key, and then doesn't need it anymore. So if somebody compromised your server, or got a file-inclusion vulnerability in your web application and tried to pull your key, that access will no longer be in the same style your application typically uses, so it will be rejected. The goal is to limit the amount of time that the key is actually accessible; it's all about reducing attack surface. You can do things like say this binary, this user, this hash, if you want to get to that granularity, can access the key. So you can get more granular than the simple case we did of just one read after boot. Yeah, and so these were some choices we made where there certainly are trade-offs, right?
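The finer-grained matching mentioned above (this binary, this user, this hash) might be checked like the following sketch. The constraint names are illustrative assumptions, not the actual policy fields.

```python
# Hypothetical sketch: allow a key read only if every constraint the
# policy specifies matches the attributes of the requesting process.

def access_allowed(policy: dict, request: dict) -> bool:
    """Constraints absent from the policy are not enforced; any
    constraint present must match the request exactly."""
    for field in ("binary", "user", "binary_sha256"):
        if field in policy and policy[field] != request.get(field):
            return False
    return True

policy = {"binary": "/usr/bin/myapp", "user": "appuser"}
ok = access_allowed(policy, {"binary": "/usr/bin/myapp", "user": "appuser"})
denied = access_allowed(policy, {"binary": "/bin/cat", "user": "root"})
```

Under such a scheme, a file-inclusion exploit reading the key through the web server's process would not match the policy's binary or user and would trigger the panic path instead.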
If you go with a full HSM install and some of these other approaches, there are more secure ways you can do this, for sure. But this to us was a good mix: it allows the usability we wanted, and it fixes the problem we see now, which is that 99% of the applications we audit as security people have keys copied onto a box, or sitting in the registry. Yeah, dropped on a Linux box, root-owned, done. Right, so we can solve that problem and do a lot of good in solving it. It doesn't solve everything, and there are other ways you could do it; those other ways just tend not to be available to everybody. All right, any more questions? Yeah. Will you support KMIP? Yes, so we will support KMIP. I like KMIP; it does solve a lot of problems. It's much more interoperable than some of the previous standards, and that's to OASIS's credit: they did a much better job this time around. So we do like it. One of the challenges we've seen with KMIP and some of the PKCS standards is they don't always handle multi-tenancy in a way that makes sense. But if we can support KMIP on the server, I would like to do that. If for some reason we can't, because it doesn't quite meet the interaction model, then what we've talked about is supporting some of these protocols, KMIP and some of the PKCS stuff, in the agent, and having the agent translate them to whatever REST calls the API understands. So you could install the agent on a box running SQL Server, point SQL Server at the agent as your KMIP key store, and it would just work, because the agent understands the protocol. So we do want to support as many standards as we can. The challenge we've run into is mostly that in some cases they still assume a single entity owns all of the key material, and that doesn't really work for us.
So it looks like we're getting the signal that it's time to wrap up. Thank you very much. We will be around, so please come up to the front. Thank you.