All right, I think we're going to get started. My name is Dustin Kirkland. I'm the CTO of a startup company in Austin called Gazzang. I'll talk a little bit about what we do toward the end of the presentation, but we'll set that up by talking about the meat of this presentation: encryption everywhere, for everything, inside of OpenStack or the cloud in general. I've got some suggestions here, some ideas of ways that we could do things better in the OpenStack community, but also some best practices that perhaps you can apply as practitioners of infrastructure-as-a-service clouds, or as users of clouds for your infrastructure. Let's talk about what's changed, in particular with respect to security. I've sat in this room for all the sessions today, and we've had some great content, so a bit of this is repeated, but I believe we're starting to center on what has changed in terms of security with respect to the cloud. The big thing is that security is getting outsourced, along with your compute and your storage and all the other things that you're outsourcing to the cloud. You're also outsourcing your security, and in some cases that happens without you even realizing it. Not only is it being outsourced, but it's also being distributed. In particular, if you spread your workloads across multiple clouds, or you're managing multiple clouds within your organization, the security that used to be in one place inside of your data center, where over the course of the '90s and the 2000s you had established the log book you had to sign and the multiple forms of identification, all of that's out the door once you start dealing with cloud computing. And this sounds a little at odds with itself, but not only is it distributed, in some cases it's consolidated. You've managed to get everything into a single cloud provider, but if they have a breach, if they have some endemic issues, that totally jeopardizes your security.
You've now centralized it. So it's that weird combination of outsourcing, distributing, and in some cases consolidating. Ultimately, you've given up some of your usual barriers, your physical barriers to your infrastructure. Automation and orchestration are absolutely essential to cloud computing, and that's really at the heart of it. We talk about configuration management and orchestration. It's beautiful, it's wonderful, and it's absolutely the only way to deal with the number of systems we're talking about, virtual machines and physical systems. You've got to automate and consolidate things. But with automation and orchestration come a handful of new challenges, new issues around security: the fact that you can no longer enter a password every time you need one system to authenticate with another system to do some process, maybe a backup or some federated exchange of information. That has to happen without passwords. It has to be non-interactive. It has to be automated. And then there's simply the numbers game, the number of systems we're talking about. That's completely changed. If you were managing a data center 10 or 15 years ago, even five years ago, you probably weren't dealing with the number of logical deployments of operating systems that you deal with now that we're in a cloud-heavy world where you just spin up a new VM, and every new VM comes with its own security challenges. Handling that is an important new consideration, something that has changed. We could go on and on, and I guess that's why we have an entire track dedicated to this today, but the last one I'm going to talk about is randomness. I've actually got a second presentation just after this one on entropy, with some ideas of how we can improve the entropy pool inside of OpenStack. I'll just leave that teaser here and plant the seed.
Some cloud computing practices render virtual machines completely predictable, and that's just the nature of the beast. I'm not going to go into that in this presentation; if you're interested, come back in about an hour. Back to that numbers game. We've exploded the number of systems that we now have under our management. With that comes a proliferation of sensitive data, and every machine has sensitive data. Inside of Gazzang, we tend to call that machine DNA: the little pieces of that machine that make it unique, distinct, different from every other machine that either you've launched or the tenant next to you has launched. The obvious ones, the big ones, the important ones are your private keys: SSH, SSL, Kerberos or some cloud equivalent to Kerberos, GPG. Any of these have some bit of private key material that's stored or generated on that box. The generation is back to the entropy problem, but the storage is really at the heart of this presentation: how to protect that storage. If not private keys (and every machine at least has a private SSH host key), then configuration files, the stuff that's in /etc. Consider for a second how much of that would be harmful to your business if the configuration of a given system leaked. There may not be anything in there that immediately gives access to the system, but it certainly would allow a sophisticated attacker to construct an attack against the system, knowing how it's set up. Log files are just treasure troves of information: some private information, some sensitive information, more information about how the system is configured and what's going on within it. And these are just a few examples, but most applications tend to put their varying state information in /var/lib or /var/tmp or /var/cache. That's machine exhaust.
I think some of the config management or monitoring folks would call that machine exhaust. It's just information that's being created by your application. In many cases it's important, and in some cases there are sensitive bits of information in there; consider for a second what's sensitive in what your application's processing. User data is seemingly obvious: maybe you've got information in /home, maybe not, but that's the human equivalent of the application data. It's the stuff that user needs to use that machine, just as the application data is what the application needs to do its work. Password hashes, various others; the list goes on. This is just an example to get you started thinking about what's important on your hosts and on your guests. And unfortunately, disks do disappear. Here are some statistics I ran across: the U.S. Department of Health and Human Services publishes a wall of shame. It's actually pretty entertaining. You can download a CSV file of all breaches that have been reported to the Health and Human Services department; by law, any loss that involves 500 people or more has to be reported. And anyone can download a CSV file of the breaches, not of the data that was leaked obviously, but of the report itself. So I loaded that up into a spreadsheet and ran some stats on it. We used this a bit for our marketing material within Gazzang, but this is absolutely accurate. I went through the 480 reported breaches, and there's a free-form field to the far right of the database where, for each incident, an explanation is given for what happened. I classified those into five different categories. Theft of a device or physical media accounted for 55% of the 480 breaches. Another 12% were lost devices, so not theft.
Theft was clearly something malicious. Lost is just: whoops, we don't know where we put it. Another 5% had "improper disposal" as the label, and that to me was different from lost or theft. But in all three of those cases, we're talking about media that walked out of a data center. And that's a little bit frightening to me, especially as the CTO of a company that does work with cloud providers. We put our company information into that cloud. We hope their security practices are good. We hope they dispose of the disks in a secure manner. We've got SLAs, we've got contracts, but at the end of the day, do you really know? My point with these statistics is that 72% of these 21 million exposed healthcare records could have been trivially protected by encrypting the data at rest and storing the key somewhere else. That's really the take-home from this presentation. Whether you're doing work as a guest in a cloud or running a cloud yourself, and I think we've got a really awesome mix of people at this OpenStack Summit, practitioners who are deploying cloud services themselves and people who are trying to leverage those clouds as guests, the point of this presentation is that both of you need to be doing encryption, doing your part encrypting that data and ensuring that the levels of protection are in place. And that's the answer to this question: so what do we do? As a host, as a cloud provider, you really have the responsibility to protect your customers' data. Your customers are entrusting you with it; if not explicitly, it's certainly implicit that they're trusting you with this data. As a guest, you need to do what you can to protect your own data at rest, using encryption within your guest. And then on the wire, use appropriate security measures, encryption on the fly as well. So we can dive into each of these.
So comprehensive encryption means protecting all layers of the cloud stack, and I listed three here; we could have broken this down into many more layers. At the highest level, the network layer: SSH seems to be pretty well understood. No one's using Telnet anymore, right? I hope not, anyway. I think we understand that one pretty well. But TLS, or SSL as it's more commonly called in practice, is used correctly in some places, but certainly not everywhere. At the file system layer, there's a handful of technologies; we'll look at eCryptfs, TrueCrypt, and zNcrypt, a product that my company makes based on eCryptfs. And below that, at the block storage layer, there's dm-crypt, encrypting the block storage. In some places one is more appropriate than another, but in all cases both the guest and the host need to be doing this work. So, SSH: well understood, and there are actually some helper utilities that are quite useful when managing SSH. ssh-copy-id is a cool utility that allows you to push your public key to another site. I think we're pretty good in OpenStack through Keystone, allowing users to upload their keys. But I remember when I first started using Amazon, Amazon forced me to generate my public-private key pair on their site. So clearly they had my private key at that point. That just didn't seem right. I'm glad we've taken a different route in OpenStack with Keystone. Being able to generate your key locally, a key that you protect and use reasonable measures on, and then push it out: ssh-copy-id is a great way of pushing keys. ssh-import-id is an interesting way to pull someone's key. We do a lot of shared access, shared terminals, at least in the development environments I've been in. You run ssh-import-id and give someone's username or a URL where you can fetch that public key from.
It'll securely pull down that public key, install it in the local user's authorized_keys file, and set up that authentication. That should say DNS host key records; not DHS, not Department of Homeland Security. No, this is DNS. SSHFP, the SSH fingerprint record, is actually a DNS record that can be used to propagate the public key fingerprint of the host that you're SSHing to. We've got some scripts that handle that with dynamic DNS for us; it's a little-known feature of DNS. Does anyone else use SSHFP? It's really cool. You know the question you get when you SSH to a system that says: this is a new fingerprint, do you trust this host? You can actually have SSH go to DNS and look up the fingerprint, if you set it up that way, and skip that question securely; SSH will verify it for you. It takes a bit of infrastructure to get that tied to your dynamic DNS provider, or to BIND if you're running your own DNS. But once you've got that set up and automated, it's a really neat feature and can really give you as a sysadmin confidence that you're actually SSHing to the system you think you are. Most of us just type yes, right? If we're really paranoid, we'll go to the console, grab the SSH fingerprint, and check that all 32 characters match. But that's a little bit clumsy, especially in a world where we try to automate everything. From an SSL perspective, many if not most applications support SSL, but it's rarely enabled by default. I attended some of the Keystone talks earlier today, and it seems that a lot of OpenStack has support for SSL communications basically everywhere, but we don't always enable it. We certainly don't enable it by default in OpenStack, and that's a little bit sad. Part of it is that the PKI piece isn't fully baked.
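The key-distribution helpers and the SSHFP setup just described can be sketched roughly like this (host names and the import ID are placeholders, and the demo key path stands in for the real host keys under /etc/ssh):

```shell
# Push your local public key into a remote account's authorized_keys:
ssh-copy-id user@remote.example.com

# Pull a colleague's public key from a trusted source (Launchpad here):
ssh-import-id lp:someuser

# Generate the SSHFP resource records for a host key, to be published in DNS.
# (Throwaway key for illustration; real host keys live in /etc/ssh.)
ssh-keygen -q -t rsa -N '' -f /tmp/demo_host_key
ssh-keygen -r demo.example.com -f /tmp/demo_host_key.pub

# With the records published in your zone, clients can let DNS verify the
# host key instead of answering the fingerprint prompt:
ssh -o VerifyHostKeyDNS=yes user@demo.example.com
```

The `ssh-keygen -r` output is exactly what goes into the zone file, which is what makes it easy to automate alongside dynamic DNS updates.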
It sounds like for the next release, the Grizzly release, there's going to be some work around PKI in OpenStack. Really, you need good signed certificates from one of the real certificate authorities, and SSL needs to be enabled. Some people are concerned about the performance impact of SSL, but with modern processors, that's such a small piece of the full scope of what's going on in a system at this point. The encryption, especially with cryptographic accelerators like AES-NI, is almost free nowadays. Key management is really the hard part; I'll talk about key management a little bit toward the end of the presentation. But with respect to SSL, it can and should be enabled everywhere you've got the opportunity to. And in a production environment, you really need real root-signed certs for those applications and those systems. At the block storage layer, the lowest layer of the storage stack, dm-crypt is a well-understood technology within Linux. It's been the Linux block-device encryption layer since the early 2.6 kernel days. It's extremely fast and very efficient. It's a bit of a blunt tool, an all-or-nothing sort of solution: you either encrypt the entire block device or you don't, and you have to make that decision upfront. You can't take a system that's been running for a while and say, yeah, I want to go ahead and encrypt that disk; there's no way to do that live or on the fly. But the entire mount point can be encrypted, and that ensures that a key is loaded at mount time. All the data written to the disk is handled by the Linux kernel driver for dm-crypt itself: it's encrypted before landing as blocks on the disk and decrypted as it's presented to the application layer. The beauty of that is that no application needs to know it's running on a dm-crypt encrypted device.
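For test environments, a self-signed certificate is easy to mint with OpenSSL (the hostname here is an assumption; production endpoints should use CA-signed certificates, as noted above):

```shell
# Create a throwaway key and self-signed cert for a test endpoint:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=keystone.example.com" \
    -keyout /tmp/keystone-key.pem -out /tmp/keystone-cert.pem

# Sanity-check the subject and validity window:
openssl x509 -in /tmp/keystone-cert.pem -noout -subject -dates
```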
The trick, though, is getting that key loaded at mount time. It's a little bit hard to do the root device with dm-crypt unless there's an administrator present who can type the passphrase, or you've got some device or dongle that can provide it, or you've got a network-based key manager like an HSM or a software-based equivalent to an HSM. But as difficult as it might be on the root device, doing it on auxiliary storage is actually quite easy. You can do that on your /srv/data or /mnt or wherever your auxiliary storage is, your secondary disk. And I would recommend this as a really good way of protecting the disk that holds your local instance storage: the thing you clone a VM from when you pull an image out of Glance (in OpenStack, we pull it out of Glance and initiate the new instance). Having that in an encrypted volume, I think, is the next logical step in the way we protect that instance's data before it's written to the local physical disk. AES-NI is a new set of cryptographic accelerator instructions on modern Intel and AMD chips. It's awesome, really awesome. It takes the computational complexity of encryption down to less than 1% of impact on the workload, in some cases zero. I really don't even see the encryption happening on my heaviest workloads when my chip has AES-NI and I'm using dm-crypt. The one other potential downside is that a single key is used for the entire mount point. In some cases that makes sense; in other cases it doesn't suit the problem very well. But it's really easy to set up, really fast, really efficient, and there's no reason why we shouldn't be doing dm-crypt by default on the auxiliary storage, on the extra disks attached to a given system.
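Setting up dm-crypt on an auxiliary disk is only a few commands. A minimal sketch, assuming the secondary device is /dev/xvdb and the mount point is /srv/data (run as root, and note that luksFormat destroys any existing data on the device):

```shell
cryptsetup luksFormat /dev/xvdb        # initialize LUKS; prompts for a passphrase
cryptsetup luksOpen /dev/xvdb secure   # unlock; creates /dev/mapper/secure
mkfs.ext4 /dev/mapper/secure           # put a file system on the clear-text view
mkdir -p /srv/data
mount /dev/mapper/secure /srv/data     # applications see ordinary files;
                                       # blocks land on /dev/xvdb encrypted
```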
Built on top of dm-crypt is a package that I co-authored with Scott Moser; Scott was in here a few minutes ago. We're both Ubuntu core developers. Overlayroot is a combination of dm-crypt and overlayfs, and it provides a union mount. A union mount has two directories: one is read-only, one is read-write, and they're sort of merged together so you get one presentation of what the file system looks like. Ubuntu, Knoppix, all the live CDs have been doing this for years. You've got a CD-ROM or ISO that's read-only, but you've got this other bit of storage, maybe in memory, maybe on an extra disk or a USB stick, that is read-write. The two merge together so that it looks like you've got a disk you can read and write data from. Most of it's coming from the read-only storage, and your changes go to the read-write storage. Overlayroot is that exact concept inside of a virtual machine instance. This is already in the Ubuntu 12.10 release, which actually released today; overlayroot is one of the features in the new cloud images from Ubuntu. It's one single line you need to configure, in overlayroot.conf. Basically, you tell it a handful of flags; this is how you set up encryption. You say crypt, that's a flag; a colon separates the different parameters, and there's a handful of parameters. You need to give it a device: dev=/dev/xvdb. That's the secondary disk associated with this instance (I did this one in Amazon). Just as easily, dev could be some other disk, or even tmpfs; you could do this entirely in memory. So overlayfs provides a union mount of the whole root file system where most of it's coming from the lower, read-only directory. That's the pristine stock Ubuntu image, the AMI that everyone's running. If you're running 12.10, everyone's running that same image.
But there's a second directory, the upper directory, mounted read-write on top of /dev/mapper/secure, which is a dm-crypt device, basically. So the primary disk, /dev/xvda, is mounted read-only. The secondary disk, /dev/xvdb, is dm-crypt encrypted, and that creates a dm-crypt device at /dev/mapper/secure. That's mounted read-write as the overlay at /media/root-rw, on top of /media/root-ro, the read-only root. The combination of those two provides a place where, when reading, you get the merger of the two, and when writing, all the changes go to the upper, read-write directory. So you start with this pristine image, just like you would with a live ISO, but all of your changes, all of /var/log, /etc, /var/lib, everything that happens on the system after boot, is written to the other disk and is encrypted before it lands on the disk. This is available today. You can do this inside of OpenStack, inside of Amazon, inside of Rackspace, anywhere that has Ubuntu 12.10 images loaded with the overlayroot package, which is installed by default now. This can be done pretty trivially. At the file system layer, eCryptfs is the other way of doing file system encryption inside of Linux. There's dm-crypt, which does full block encryption, and then there's eCryptfs, which does per-file encryption. Full disclosure: I'm one of the authors and maintainers of eCryptfs. I'm trying to do this as objectively as possible, but it's hard to omit that small fact. It's file system encryption, but on a per-file basis. With eCryptfs, you mount one directory on top of another directory. The upper directory, to applications and user space, looks like clear text data: files and directories, just like any other directory, recursively down below that. But that's only a virtual view of those files.
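To make the overlayroot setup described above concrete, the single configuration line in /etc/overlayroot.conf might look like this (the device name follows the Amazon example above; tmpfs works too):

```shell
# /etc/overlayroot.conf
# "crypt" selects the dm-crypt backing mode; everything after the colon
# is a parameter, and dev= names the read-write backing device.
overlayroot="crypt:dev=/dev/xvdb"
```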
It's only a virtual presentation of that data, that information. If you've got the keys loaded into the kernel keyring, you can see that data; you read and write it as if it's just normal files and directories. None of those files actually exists there in clear text; it's only a virtual representation of those files. What actually happens is the kernel does a translation between that upper directory and the lower directory, and before data lands on disk in the lower directory, it's actually encrypted, or decrypted, depending on the direction you're going. Each individual file can be encrypted with a different key. Each individual file can have its own policies for those keys, and some files can be encrypted while others are not. I think this would be a really interesting way of doing encryption inside of OpenStack at the host level: use eCryptfs and encrypt different images, different backing disks, with different user keys. A user could load their keys, or have their own distinct keys managed by Keystone or another key store, and before that data is read from or written to disk, the appropriate keys are loaded by that process. Yeah, question. So, the performance hit on eCryptfs: it's really hard to benchmark, but we tend to quote a 3% to 5% performance hit. It really depends on the workload, how much reading and writing you're doing; it's a fairly memory-intensive operation. One other part: the AES-NI support isn't yet working right with eCryptfs. It's coming; we've got patches on the kernel mailing list, and we should see that in the next couple of months. It can be much worse than that, 10% or 20%, depending on how intense the reading and writing is.
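A minimal eCryptfs mount of one directory on top of another might look like the following sketch (run as root with ecryptfs-utils installed; the paths, cipher, and key size are illustrative, and the mount prompts for a passphrase interactively):

```shell
mkdir -p /srv/encrypted /srv/clear
# The lower directory holds ciphertext; the upper mount presents clear text:
mount -t ecryptfs /srv/encrypted /srv/clear \
    -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_enable_filename_crypto=n

echo "sensitive" > /srv/clear/note.txt   # written through the eCryptfs layer
ls /srv/encrypted                        # shows the encrypted backing file
```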
In general, in the profiling we've done around disk encryption, be it file or block device encryption, if your disks are anything less than extremely fast SSDs, the seek time to find the next block often fits within the CPU time for encrypting and decrypting that block. So in some cases you don't even notice the difference. Good question; I think it's worth benchmarking. It's good feedback. Robert? No, it's seekable. eCryptfs encrypts in blocks; there are 4K pages, so it's very seekable. The other advantage of eCryptfs is that incremental backups are possible on a per-file basis. With dm-crypt, if you want to back up the encrypted data, you've got to send the whole volume, the whole block device, wherever it's going. With eCryptfs, you can rsync or incrementally back up on a per-file basis. Now, if one file is 20 gigs and you've changed it, that's the whole file, unless you've got rsync configured in such a way that it can send bits of a file. But from an eCryptfs perspective, each block is encrypted separately. Another question. This was introduced in Sandy Bridge, I believe. Excuse me? Nehalem? Excuse me? Westmere? Okay, I'm not an Intel engineer. When the aes flag appears in /proc/cpuinfo, your CPU has the silicon necessary to do AES-NI encryption. You need to modprobe aesni_intel (or the AMD equivalent), and then the kernel takes it from there. If it's dm-crypt, you'll see a big performance increase. With eCryptfs, you won't yet, but we're getting close; we've got the work done, we just need to get it merged into the VFS layer. Another question? That's a great question; I'll repeat it, since I forgot to do that for the others. In eCryptfs, do the encrypted files look just like the decrypted files from a metadata and attributes perspective?
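Checking for the accelerator and loading the driver looks roughly like this (run as root; on chips without the aes flag, the modprobe simply fails harmlessly):

```shell
grep -m1 -o '\baes\b' /proc/cpuinfo        # prints "aes" if the CPU has AES-NI
modprobe aesni_intel 2>/dev/null || true   # load the accelerated AES driver
grep -i 'aesni' /proc/crypto | sort -u     # accelerated AES implementations, if registered
```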
For eCryptfs, yes; for dm-crypt, no, it's just encrypted block data, and sometimes that's what you want. With eCryptfs, the file permissions, the ownerships, the timestamps, and the rough size (encryption does pad the data, so it takes up a bit more space) are all visible to an attacker who sees the encrypted data. They can't see the contents. File name encryption is an option; you can turn it on or off. There is a bit of a performance hit for encrypting file names, because every ls in a directory, every search or find, has to do a whole lot more decryption than it normally would. So you can encrypt file names, to that part of your question. There's also a limit, because the encryption necessarily involves padding: the normal Linux 255-character file name limit comes down to something like 180 to 200 characters. The Linux PATH_MAX is 4096; you can hit that a little bit faster if you've got a lot of directories with encrypted file names. The file attributes are not encrypted, so it may reveal that you've got a file with these permissions and those ownerships. Sometimes you want that, sometimes you don't. It makes it really nice for doing backups: it's easy to back up the encrypted data and know that the contents of those files are encrypted. But if you're trying to protect the structure, if you have a foo.jpg and file names aren't encrypted, an attacker can see that its name is foo.jpg. eCryptfs can be used at both the host and the guest layer. I think it's worth benchmarking and seeing how this works in a big production environment; dm-crypt might be the better way to go for protecting the backing disks. Inside of the guest itself, I use eCryptfs all over the place to protect data that may be in just an ephemeral instance.
An instance is only going to last so long, but I want to make sure the data is protected before being written by the hypervisor to the underlying storage, and this is an easy way to do that: I can load the key into kernel memory and discard it, and the key never lands on the disk. So, just briefly, to introduce zNcrypt. zNcrypt is our commercially supported version of eCryptfs; we'll have support for dm-crypt available later this year. It's a wrapper for eCryptfs, and the open source eCryptfs is the core of our product. We've certified it with several enterprise workloads, including CDH, Elastic MapReduce, MongoDB, and Cassandra, and we've tested and benchmarked it on tons of other workloads: OpenStack, GlusterFS, Riak, MySQL, Postgres. Our customers come to us because they need a commercially supported version of an encrypted file system, something that's backed by a vendor. We've got easy installation: debs and RPMs, with apt and yum repositories for those. And we've bundled support for remote key management. Key management is really the hardest part of all of this. Encrypting stuff is easy; the question is: where do you put the keys? How do you store the keys in such a way that you can reboot your system, it fetches the key, and it can mount that data? Oh, and by the way, do it in a way that your attackers can't just do the same thing and simulate that process. That's really the hard part here. And in some cases you're protecting it from your cloud provider; or, maybe that was a Freudian slip when I said DHS earlier, maybe you're protecting your keys from Homeland Security or some other government body elsewhere. Unfortunately, right now, for the OpenStack space, there's no complete, comprehensive open source solution to global key management. Keystone doesn't really fit this; it's similar, and it's easy to get confused, but Keystone really handles identities, not generic opaque cryptographic objects.
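One common pattern for that reboot problem is to fetch the volume key from a network key manager at mount time and pipe it straight into the unlock command, so it never lands on local disk. A sketch, with every name here (URL, CA file, device) hypothetical:

```shell
# Retrieve the key over authenticated TLS and unlock the LUKS volume with it;
# "-" tells cryptsetup to read the key material from stdin.
curl -fsS --cacert /etc/pki/keyserver-ca.pem \
    https://keys.example.com/v1/volumes/xvdb \
  | cryptsetup luksOpen /dev/xvdb secure --key-file=-
```

The hard part, as noted above, is making sure an attacker who steals the disk or the image can't replay this same fetch; that's what the key server's authentication and release policies have to enforce.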
The traditional way of solving this problem is hardware security modules, but hardware security modules don't map particularly well to cloud environments, where a tenant needs the equivalent of an HSM but only for a few days or weeks or months. Hardware security modules are literally that: welded-shut, 1U rack-mountable systems that you buy. If you've got physical infrastructure, they're great; they've been around for ages and they're an excellent way of doing things. But as we move to cloud, HSMs don't map particularly well. Our solution is something called zTrustee. It's a comprehensive key management solution: there's a server, and there are clients that put and get their keys from it over a secure channel. You actually have the opportunity to assign trustees to those keys, and trustees are basically another factor of authentication: they authenticate with the system and allow or deny the release of those keys. So, in conclusion: encrypt everything, everywhere. On the hosts, encrypt at the network layer, ensuring that OpenStack or your cloud system is set up in a way that takes advantage of TLS or SSL across the board. When you're doing testing, sure, self-signed certs are great. Once you go into production, it really means having real SSL certificates, or setting up a PKI, a public key infrastructure, and federating those ahead of time. It's a little bit harder to do it that way, but depending on the size of your infrastructure, it might be a lot cheaper than buying 20,000 certs or wildcard certs from Verisign or someone. And encrypt the backing disk images: in some cases the disk images themselves are sensitive, and in others it's once the instance launches that the instance data needs to be encrypted. We've got opportunities to encrypt lots of things in OpenStack, and I really hope to see us move in that direction in the near future.
Nova, Swift, Glance, Cinder, Ceph: each of these has a layer that writes sensitive data, sometimes sensitive data that's outside of its control, to disk, and each of those should go through a dm-crypt or an eCryptfs layer. Within the guest, take control of your data and ensure that your data is protected. At the network layer, that's just federating your applications together; you've got that same responsibility. It's not at the OpenStack layer, it's whatever your apps are that are talking to one another, maybe over a REST-based API or web service. And for the locally generated data, before it's written to disk, if you don't know what your cloud provider is doing, or maybe you don't necessarily trust that cloud provider, using dm-crypt or eCryptfs to encrypt that data before it's pushed to the lower storage is important. Then, across the environment, manage the keys centrally and securely using a key management solution. Some people bake their own or roll their own; maybe we'll see one in OpenStack at some point. And then, just briefly, the advertisement: we're Gazzang, headquartered in Austin, Texas. We're a startup; we do encryption and key management for big data and cloud. Most of our customers are trying to protect their data for HIPAA regulations (that's healthcare), FERPA (student records), PHI, PCI, PII: all the personally identifiable information. Our customers have those requirements, and they come to us to help ensure that their data is encrypted, sometimes in their public cloud solutions, sometimes on their local site. That's that. I'll take any other questions now. Yes, sir. Yep, eCryptfs is a Linux kernel module; it's a Linux-only offering. TrueCrypt would probably be the closest equivalent to eCryptfs that's multi-platform. Our focus, Gazzang's focus, is Linux exclusively. I understand some people have other OS requirements; TrueCrypt would probably be the best bet where multi-platform support is a necessity.
dm-crypt is also Linux-only, but it's been supported for a long time. This isn't new: eCryptfs landed in the kernel in 2006, dm-crypt a year or two before that. So they've both been around for six, seven, eight years. They're in all the major distros; RHEL supports it, Ubuntu supports it. Yeah. Yes, sir? You said NIST-tested. Right, so FIPS 140-2 is the cryptographic standard from NIST. No, we haven't. We use AES under the covers, but neither eCryptfs nor dm-crypt does the encryption itself; they offload that to the Linux kernel's cryptographic layer. The crypto API in the kernel provides AES among others: Triple DES, AES, Blowfish, all of these. We use AES-256, excuse me, AES-128 by default; Gazzang uses AES-256 by default. We've written no crypto in eCryptfs; that's offloaded to the Linux kernel. So the question is: has the Linux kernel been through those certifications? My guess is probably, but I don't know that one definitively. Any other questions? Yes, sir? TPM support: if you want to use it, you certainly can. We don't have any bundled support for TPM. The TPM would store your root of trust; that's left as an exercise for the user. Through our consulting services, we've helped users leverage their TPMs. There's also no virtual TPM, and most of our customers are running in a cloud, in a virtual machine, so this would only be supported on real hardware. You would then use the TrouSerS interface to retrieve those keys from the TPM. It's important to understand what that protects. When you use the TPM, you're basically binding that disk to that chassis. If you move the disk away from the chassis, well, the disk's key is back in that chassis, and you can't get to it. But if you move the disk with the chassis, if someone steals the entire physical machine, then you don't really have that protection.
So it's just important to understand what the TPM does and doesn't protect. Maybe that's what you want; maybe you're just worried about someone literally pulling a disk, because it's a lot easier to stick a disk in your pocket than a 4U rack-mountable server. So the answer to that question is: when the hardware is present and you've got the TrouSerS library, sure, you can put and get your keys from the TPM on the local system. It takes a small wrapper shell script to do that. Anything else? Good. Well, thank you very much. Appreciate it.