Okay. This is the session on security. My name is Raghu Yaluri and I'm a principal engineer at Intel Corporation, and I drive a lot of the security solutions development and pathfinding. I have one primary job at Intel: I don't want people to tell me that security is the reason why they're not migrating their tier one, tier two applications to the cloud. As a company, Intel is very heavily focused on making sure that there is a foundation of trust as more and more workloads are moving into virtualization and cloud. When we talk about trust, it is across compute, storage, network and apps. And once the instrumentation is there in the platform, you make sure that it's enabled at the hypervisors, enabled at the cloud operating system level. We are very heavily involved in OpenStack to ensure that, number one, OpenStack is secure and, number two, OpenStack actually manages workloads and clusters in a trusted, secure and verifiable way. My partner today, JK, couldn't come here for some personal reasons. He's from our McAfee division, and I'm going to try to do justice to a few of the concepts he was going to cover. The way I'm going to do this: I'm going to spend maybe one or two slides talking about the challenges in cloud. There are many, many. At least in the context of this session, I'm going to talk about the three or four challenges that we seem to care about a lot and what we are doing at Intel to address those challenges. I'm going to talk about three solutions. Some of you might have heard of trusted pools and trusted compute. I'm going to talk about boundary control in this abstracted cloud environment through hardware-based asset tagging or geo-tagging. And the third one is about VM protection: you are in a multi-tenant cloud, so how can you protect your VMs from the service provider, from insider threats at the service provider? We're seeing a lot of interest in these solutions.
They are in various stages: released, soon to be released, in development. I'm going to walk through all of that. I'm sure there's nothing new here for anybody who's been looking into cloud. Big challenges: as a service provider, how do you meet compliance requirements? Until a service provider does this effectively, tier one, tier two applications will not move into a cloud. As a tenant, how do you protect your workloads from insider threats? The easy answer is encryption. But who holds the keys? The enterprise wants to hold the keys. So how does that model work? And how do you ensure that you're releasing keys to the right entity before your data, your workloads get decrypted and executed? If you look at some more specific questions: how do you trust the servers on which your workloads are going to run? They're all with somebody else. They tell you, hey, trust us, your workloads are going to be safe, they're running on trusted infrastructure. But how can they prove that to you? How do you know the physical locations of these servers? There are many business segments where boundary control by geographical location is very critical. Not just geographical, but even by functional boundaries. If you look at the PCI DSS guidelines, an entity that's doing transactions cannot coexist with something that's just doing front-end web pages. You can't put them together on the same physical server in a virtualized environment. How does a cloud provider ensure that happens? And the last one that's very critical for enterprise readiness of any of these cloud environments is your ability to audit the configurations and the policies in an almost continuous kind of monitoring model. Intel believes that the concept of trusted compute that we have been trying to drive for the last two, three years starts to address some of these questions. So what's trusted compute?
We've been trying very hard to create a new security control in the data center called platform trust. All the hosts in a cloud that have demonstrated platform trust would be aggregated together logically, and they are segregated from hosts that cannot assert their platform integrity. Once you have the trust, then you can start using policies. Policies can be encoded into the orchestrator in OpenStack, or they can be defined in a separate policy tool that interfaces with the orchestrator, to ensure that your workloads only run on trusted servers. Then on top of that, you can actually get visibility into the integrity of that server. You may not know which server your workload is running on, but you will be able to query your service provider and say, tell me the integrity of this server, and you're going to get a verifiable manifest that shows the integrity of that server. And then you can use that information for audit, security monitoring, all kinds of compliance requirements. The Intel technology that makes platform trust happen is a technology called TXT, Intel Trusted Execution Technology. I'm not going to get into a lot of details of that. If you want, you can go to intel.com slash security and you can find a lot of information about TXT. But it is a very pervasive technology in the platform. There are new instructions in our CPUs, there's new silicon in our chipsets. The BIOSes have been updated to use TXT. All the hypervisors, VMware and all the open source hypervisors, have been enabled to use TXT. And a lot of the tools above that are now beginning to use TXT as a security technology to assert the integrity of the platforms. Once you have TXT doing this, then you need an attestation mechanism so that any time you want to know the integrity of a host, you can ask the attestation system: tell me if server X or server Y is trusted. If the answer is yes, the OpenStack scheduler can then place your workloads on it.
So what does the capability roadmap look like? At the bottom of it, we have this capability called trusted boot. That's what uses the foundational Intel TXT technology to assert that you have launched your operating system on a trusted server. And you can actually prove to somebody that everything in the boot process is what you expected to launch. There is no question of somebody inserting an out-of-compliance component; tampering of any kind is avoided in that process. That is there today. It's been there since the Folsom release of OpenStack; trusted pools and trusted boot have been there. Now we are building on top of it; we have not stopped there. We are beginning to add a new capability, what we call geo-tagging or asset tagging, whereby you can now store a descriptor in a secure location on the platform. The descriptor can be latitude, longitude, altitude from a GPS system. It could be logical geographic information like country, state, city kind of things. Or it could be functional, like this is finance versus HR versus something else. The key is that the descriptor is secured on the platform, and through the trusted launch process that descriptor is made available to the OpenStack orchestrator. So now the OpenStack orchestrator can use that to create segregation of workloads, boundary control of workloads. I'll give you one quick example. If you are doing any of the FISMA workloads, the federal government workloads, they cannot leave the continental US. With a descriptor like that, the OpenStack scheduler can now make sure that if it's a FISMA workload, it will not move from the federated OpenStack clouds you may have here to any other places. So you don't have to get involved in the process of controlling that. Once you set the policies and you have this descriptor, it's made available. I'm going to get into a lot more detail on geo-tagging as we go along here.
Now, building up the chain: I have the platform that's trusted, I have the ability to do boundary control. The big request we got from service providers and enterprises is, hey, can you provide the same level of trust to the VMs and the workloads as well? Just like you can assert the integrity of the launch of the host itself, can we measure and attest the VMs themselves, so that when a service provider is launching our workloads we know, number one, they are being launched on trusted hardware, and number two, they are launching what we want them to launch, and they can prove to us that they are launching the same thing. So that's what we call tenant-controlled VM launches, or VM encryption/decryption. And the idea is everything about the VM is encrypted. You own the keys in the enterprise, and you release the keys to the service provider only when the service provider can assert that they are launching it on a trusted piece of hardware. I'm going to walk through the architecture for that one as well. And finally, the end goal for us in all this is to be able to provide you runtime protection. Just like these things protect you at launch time, you should be able to ask a service provider at any time: my workload that's running on whatever server in your environment, is that still running on a trusted server or not? And the service provider should be able to give you a piece of manifest which says, here is proof for you that as of 15 minutes ago the server asserted its runtime integrity. If you were at the Mirantis session yesterday, there was a guy from Nebula who was talking about how complicated runtime integrity is and what a hard computer science problem it is. No disagreement; we still think it's a hard computer science problem. I don't think all of it is going to be solved anytime soon, but we think there is some low-hanging fruit that we can touch there. Sorry, question? Good question. TXT as a technology has been available since...
The question was what percentage of hardware providers and service providers have this. I'll answer that in two ways. The hardware vendors have supported this technology since late 2009. The operating system and hypervisor vendors started enabling it in late 2010. So as of now, everybody has it on the hardware provider side. Service providers are beginning to take this on. Again, I can't speak for Amazon or Rackspace specifically, but the interest and the customer demand for this is quite high, and every service provider is beginning to think about having this technology as the refresh cycles permit. Is that good for an elusive answer? Any other questions before I get to the next level? Like it says on the bottom, we are actively targeting Juno for at least the bottom three. If we don't make Juno, we're at least hoping we're going to make Kilo. So I'm going to drill a little bit more into the geo-tagging stuff. The question that people ask me is, what is geo-tagging? The concept is pretty straightforward. One or more tags, which are name-value pairs, are bound to a unique ID on the host and digitally signed by an authority of some kind, and a hash of that, either a SHA-1 or a SHA-2 hash, is taken and stored in a non-volatile index on a TPM on the hardware. This index is a read-only index and it's a persistent index, meaning it doesn't disappear between boots. If that information happens to be geographical information, we call it a geo-tag; if it is anything else, it is a standard asset tag. That's the nomenclature we've been using. And there are a few changes in OpenStack that we are making so that, number one, this tag is visible through the stack and, number two, the orchestrator can make use of it for its placement and migration decisions. So the principle of operation is pretty straightforward. Number one, you write that index into the NVRAM.
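The tag construction described above (name-value pairs bound to the host's unique ID, hashed and stored in the TPM's NVRAM) can be sketched roughly as follows. This is a minimal illustration, not the actual provisioning format: the JSON serialization and field names are assumptions, and the digital-signature step is omitted.

```python
import hashlib
import json

def geotag_digest(host_uuid, tags):
    """Canonically serialize the name-value tag set bound to the host
    UUID, then hash it. This digest is what would be written to the
    read-only, persistent NVRAM index on the TPM at provisioning time."""
    blob = json.dumps({"host": host_uuid, "tags": tags}, sort_keys=True)
    # SHA-256 shown here; the talk mentions SHA-1 or SHA-2 variants.
    return hashlib.sha256(blob.encode()).hexdigest()

digest = geotag_digest("00000000-0000-0000-0000-000000000001",
                       {"country": "US", "state": "OR", "function": "finance"})
# Attestation later recomputes the digest from the known-good tags and
# compares it with what the server's TPM asserts; tag order doesn't
# matter because the serialization is canonical (sorted keys).
assert digest == geotag_digest("00000000-0000-0000-0000-000000000001",
                               {"function": "finance", "state": "OR",
                                "country": "US"})
```

Because the digest is bound to the host UUID, copying the same tag blob to a different server would produce a mismatch at attestation time.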
I already mentioned that one. Number two, when the platform is booting up, it uses TXT and does its measured launch, a trusted launch. If the trusted launch didn't happen, or we can't get a proper attestation of it, that geo-tag is suspect; you shouldn't use it. But if the trusted launch happens and the attestation is good, then you have a legitimate geo-tag that's made visible to you. Now the next thing is, yeah, question? It's a standard provisioning tool that we provide, or from any ISV, or maybe an open source one also. You do two things. You can create... actually, can I defer that question a little bit? I'll show you an example of how we create the tags. So once the launch process is done, now you have to define policies for your VMs, for your workloads. You need to say where these workloads can reside, cannot reside, where they can migrate. That's what we call in step three here the policies. The most basic policy could be a trust policy, which says this VM only runs on TXT-trusted hardware. That's a simple policy. The second policy could be a location policy, which says this VM is a finance VM so it only runs on my finance clusters or my finance availability zones. The policy could also be: this is a regulated workload, so it can only run within the continental US. So those are examples of policies. You need to create those policies so that the orchestrator knows what to do with them. And the next thing is the OpenStack orchestrator essentially getting an attestation of two things. It's going to get an attestation of the trust. It's going to also get an attestation of the geo-tag itself. And when I say attestation of a geo-tag: there is a known-good geo-tag you provision on the server, and you do that in an out-of-band model. Now the server is presenting its geo-tag to you, saying, hey, I have this geo-tag in me. The attestation process essentially verifies that what the server is asserting is what you expected the server to have.
That is the attestation process. So you get the trust attestation, you get the location attestation, and if they both meet your requirements, the OpenStack orchestrator is then going to place the workload on the best server that meets those two requirements. If it cannot find a server, the scheduler is going to log that for you and say it couldn't find a compliant server, and then you can take whatever remediation actions. Yes. Typically it would be a technician, an ops guy in the data center. At least in the model that we are espousing right now, they would be given a set of tags and a set of UUIDs for all the servers. There are multiple ways you can get to that. Either the person who is going to do the rack... you know, again, can I take that offline for you? It's going to be a much longer answer on the actual provisioning process itself. But just to be clear, there is a well-defined process that we are recommending. It doesn't mean that's going to be the process every service provider uses. There need to be checks and balances, and there are also security implications: the tag is only going to be as good or as secure as how you protect the provisioning process. So this is an overview of the changes that we are making to an OpenStack environment. The ones in yellow are the extensions and changes that we are making. There's a new filter called the location filter that's added to OpenStack, at least in our development environment. Again, it's not committed up into the community yet. There will be some changes to the Horizon dashboard so that you can figure out what the tags are, what the selections are. And then the policies on VMs are currently being implemented as properties on images. So you can have a trust property, a location property, anything. Using the image store registry, we are storing them as properties there. The ones in blue are the things that are required in the back end to make all of this work.
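The decision that location filter makes can be sketched as standalone Python. To be clear, this is illustrative, not the actual Nova filter API, and the image property names `trust_policy` and `location_policy` are assumptions for the sketch:

```python
class LocationFilter:
    """Standalone sketch of a Nova-style host filter: a host passes
    only if it attests as trusted AND its attested geo-tag satisfies
    the VM image's location policy."""

    def __init__(self, attestation):
        # attestation: callable(host) -> {"trusted": bool, "tags": {...}}
        self.attestation = attestation

    def host_passes(self, host, image_properties):
        report = self.attestation(host)
        if image_properties.get("trust_policy") and not report["trusted"]:
            return False
        required = image_properties.get("location_policy", {})
        # Every required tag must match the host's attested geo-tag.
        return all(report["tags"].get(k) == v for k, v in required.items())

# Example: a regulated workload pinned to the continental US.
def fake_attest(host):
    tags = {"node-us": {"country": "US"}, "node-eu": {"country": "DE"}}
    return {"trusted": host != "node-bad", "tags": tags.get(host, {})}

f = LocationFilter(fake_attest)
props = {"trust_policy": True, "location_policy": {"country": "US"}}
assert f.host_passes("node-us", props)
assert not f.host_passes("node-eu", props)       # wrong geography
assert not f.host_passes("node-bad", props)      # fails trust attestation
```

In the real deployment, the attestation callable would query the attestation service over the network rather than a local dictionary.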
So there is an attestation service and there is a tag provisioning service, the thing we were beginning to talk about. And on each of the servers, within the trusted compute base on the server, there is a provisioning agent that runs during the provisioning process so that it can actually write the tag into the TPM. So the process would be: the provisioning service communicates with the provisioning agent and writes that into the TPM. Today these are components that Intel is providing, and there will be open source versions of these components. There's already an open source version of the attestation service; if you go to intel.com slash security, it's called Open Attestation, OAT. The provisioning service will also have an open source component, so that you can do all of this in OpenStack using all open source components. And it's not just us talking about this. NIST has a publication called IR 7904. IR stands for interagency report; this is what NIST publishes as a special bulletin or an interagency report that all the federal agencies, and typically the community as well, pick up. This one specifically talks about trusted geolocation in the cloud. It talks about using a hardware root of trust for ensuring trust and ensuring geolocation, and it has some proof-of-concept implementations as well of how you can do geo-tagging. The one thing I want to mention about this publication is that it was all built using the VMware ecosystem initially, but NIST is looking to update it very soon with an OpenStack version as well. So, a quick walkthrough of the screenshots here. I was hoping I could do a real demo, but the wireless is not that reliable for some reason here, so I'm just going to show you the screenshots. Here is how you define tags. Like I said, the tags are name-value pairs. They can have anything: country, state, city. You can have latitude, longitude, anything.
They are all packaged together as a selection, and you have a bunch of these selections available, and then you associate them with individual hosts as you go through the provisioning process. So we made some changes to the Horizon dashboards so that you can actually provision these tags into the servers that are in your cluster. There's a new button here called provision geo-tags, and then you have the list of tags, you pick them, and they're associated with the server. And the way you know that a given server has a geo-tag or TXT is by these two icons: the lock icon shows that it's TXT-enabled, and then there's this little indicator which shows that there's a geo-tag provisioned for this server. This is where you define policies for VMs. So I have four images here, and then I have two policies on each of those: a trust policy defined and a geolocation policy defined. And here is the UI to do that in Horizon. So you can see a VM policy. I may not care about location at all; all I care about is trust. Then I just pick the trust-only policy. But if I pick a geo-policy, the trust automatically comes with it. Without the trust enforcement, the geo-policy enforcement doesn't make any sense. So I defined the tags for the servers, I created the VM policies, and the next thing I would do is let OpenStack do its thing and launch those VMs. The location filter that we added to the OpenStack scheduler looks for the right set of servers that meet the policies and assigns the best server with the least CPU utilization to that specific workload. So now I have instances created here with these two policies. If an instance couldn't be created, it wouldn't show up here, and then you can go back and look at either the command line or the Horizon interface for what happened. We are able to raise different kinds of syslog alerts when something fails.
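The rule just described, that selecting a geo-policy implicitly pulls in the trust policy, comes down to a couple of lines. The property names here are hypothetical, matching the earlier sketch rather than the actual implementation:

```python
def effective_policy(selected):
    """A geo-policy only makes sense on a TXT-attested host, so
    selecting one implicitly turns the trust policy on as well."""
    policy = dict(selected)
    if "location_policy" in policy:
        policy["trust_policy"] = True
    return policy

# A trust-only selection stays as-is; a geo selection gains trust.
assert "trust_policy" not in effective_policy({})
assert effective_policy({"location_policy": {"country": "US"}})["trust_policy"] is True
```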
So you can pick them up in any security monitoring tool and do the necessary remediation at that point. The last screen I wanted to show you... I guess I didn't put that in. I was going to show you a screen of the trust dashboard itself, whereby you can have a bird's-eye view of all your servers, what the trust status is, and when the last trust information was asserted about those servers. I can probably show it to you offline later. So the next thing I want to talk about is the second concept, the capability I talked about: VM protection. The use case is very straightforward here. You want VMs to be encrypted at rest and in transit, up until execution. Given the limits of technology today, you can't have the VM still in an encrypted state during execution. X number of years from now, that might change as well. But until then, let's say VMs are encrypted at rest and in transit, up until execution. The tenant controls the encryption keys. At no time would the service provider have access to the key, except in one case: they have to decrypt your VM to run it, so in memory that key exists for a while before it gets flushed out. That's the only time a service provider would see your key. And you have policies where you can determine when you release the keys to the service provider. In the current reference implementation we have, we release the keys to a service provider only if they can assert that the server on which they are running your workload is a trusted server. The service provider actually passes a TPM-specific key to the key management system, and the key management system wraps the decryption key with that TPM key. So only that server, that TPM, can decrypt the decryption key that you have given; nobody else can. And it's great that it gets encrypted and it gets decrypted, but when you launch it, you want proof that the service provider launched exactly the image that you wanted to launch.
So that's what the last bullet talks about: we can extend this chain of trust now all the way to the virtual machine, the guest itself. And because it is rooted in TXT, I can trace it back all the way down to the root of trust, to the microcode in the CPU. So it becomes a very strong, verifiable, auditable piece of information that's very good from a compliance perspective. I apologize for the picture; it's too small, there's a lot of small font in here, but I wanted to get everything on one slide. The code name for this thing is Mystery Hill; that's why you see MH every so often here. I tried my best to get rid of it, but it's still there. So this is where you encrypt your VMs, and you store the keys in a key management service at your facility. Those keys are packaged up, and the encrypted VM blobs are uploaded into Glance. And however you launch the VM, either through Horizon or through the API mechanism, when the VM is getting ready to launch, that blob of the VM workload gets downloaded from Glance to a server. We intercept the request and we talk to the key management service, saying: here is a key ID that I need the decryption key for. AIK is an Attestation Identity Key; think about it as a TPM key for now. Here is my TPM key and here is the ID of the decryption key that you have; give me the decryption key. And the key manager says to the attestation system, I got this TPM key; can you tell me if that server is trusted or not? So that's what this request back here is. This guy knows about the trust of that TPM's server. He'll give you an assertion back: hey, this is a trusted server, or it's not a trusted server. And then based on that policy, based on the result of that check, the key manager takes the decryption key, wraps it up in that AIK, the TPM key, and sends it back to this Dom0 plugin. And now this plugin uses the TPM to decrypt the wrapping key, get the decryption key out, decrypt the VM itself and then launch it.
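The round trip just described, where the plugin presents a TPM key plus a key ID, and the key manager checks attestation before wrapping and releasing the decryption key, can be modeled end to end. This is a toy sketch: the XOR-with-hash "wrapping" stands in for binding to a real TPM key (which would use asymmetric encryption such as RSA-OAEP), and all names are hypothetical:

```python
import hashlib
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class KeyManager:
    """Tenant-side key manager sketch: release a VM decryption key only
    to a server that passes trust attestation, wrapped so that only the
    holder of the presented TPM key material can unwrap it."""

    def __init__(self, attestation):
        self.attestation = attestation   # callable(tpm_key) -> bool
        self.keys = {}                   # key_id -> VM decryption key

    def store(self, key_id, dek):
        self.keys[key_id] = dek

    def release(self, key_id, tpm_key):
        # Policy: release only to a server that attests as trusted.
        if not self.attestation(tpm_key):
            raise PermissionError("server failed trust attestation")
        # "Wrap" the key; stand-in for encrypting under the TPM key.
        pad = hashlib.sha256(tpm_key).digest()
        return xor(self.keys[key_id], pad)

# Server side: the trusted Dom0 plugin unwraps with its TPM material.
tpm_key = secrets.token_bytes(32)
km = KeyManager(attestation=lambda k: k == tpm_key)
dek = secrets.token_bytes(32)
km.store("vm-image-1", dek)
wrapped = km.release("vm-image-1", tpm_key)
assert xor(wrapped, hashlib.sha256(tpm_key).digest()) == dek
```

The important property the sketch preserves is that the key manager never ships the bare decryption key: it ships something only the attested server's TPM material can turn back into the key.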
And if measurement is a requirement, it can measure it, get it attested and then launch it. The whole idea of attestation is that it is completely tied to a root of trust. The service provider cannot really do anything to the attestation process. It's a standard Trusted Computing Group defined mechanism, and it is all tied to a root of trust. So unless the root of trust is compromised, the attestation cannot be compromised. Any other questions? We introduced this concept at the OpenStack Summit in Hong Kong. We have made tremendous progress in actually implementing a lot of the pieces here. We have demonstrated pieces of it at another conference recently. We are hoping that by the November Fall Summit, we're going to have all these pieces in place. We're getting a lot of interest in this use model, this solution, and we are going to submit a couple of blueprints hoping that we can target Kilo for this. Yes? That's a good question. We decrypt it, but then we use Linux dm-crypt on the file system as well before it gets launched. So the actual decryption happens with the decryption key, it gets stored using the dm-crypt process, and before it gets launched it's still in an encrypted state on the file system. Any other questions? Yes? Yes, it will. Again, I apologize for all these terms here, but Open Attestation is the open source version of Mt. Wilshire. Any other questions? Yes? Yes, so all of this is open source. I wish I had a very, very strong answer for you. I don't. We have been talking to folks about what we need to do to ensure that the scheduler extensions that we have done work as they are supposed to; maybe OpenStack itself should run on trusted, TXT-protected infrastructure. That's a separate discussion altogether. Any other questions? Yeah, you know, I was hoping I would totally skip the vTPM discussion today, but it looks like I can't.
This guy here, the Mystery Hill plugin, does what TXT and the TPM do for the physical hardware. It has a measuring component and it has a storage component which functions like a vTPM. The reason we are not calling it a vTPM today is that it doesn't have the standard TPM interface to it, but it still has the same primitives. You can do seal, unseal, attest, all the same primitives. We just don't have the vTPM interface implemented on it yet. But without this, we wouldn't be able to measure and attest VMs. It's there; we just don't call it a vTPM today. It's running as part of the hypervisor. In the case of Xen, it runs in Dom0, and Dom0 itself is part of the trusted compute base for us, meaning that the TXT launch process will measure Dom0 and attest Dom0 before it comes up. The hypervisor that's launching the VM image is going to measure the VM, write the measurements into this vTPM, let's say, and then continue with the launch process. It's an outside-in measurement, not an inside-out measurement. The guest itself would not have to be involved in it in any way. Any attestation of it is done by the hypervisor. The hypervisor is telling you that this VM that's launched is what you wanted me to launch, and here is the proof for it. The guest will never tell you, hey, I launched successfully, and here is the proof. It's the outside-in model. And it's very similar to the way TXT and the TPM work: they measure something, the measurements are stored in the TPM, and then they launch the component. That component, if there is a follow-on component to launch, is going to measure the next component, extend the measurements into the PCRs on the TPM, and launch that component. It's always the component that comes before measuring the one that follows. Why do you need visibility into the measurements of the guest from within the guest? That's the thing that I'm still struggling with. Why do you need those measurements?
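That measure-then-extend chain, where each component measures the next before launching it, can be sketched with the TPM 1.2 PCR extend operation. The component names below are placeholders:

```python
import hashlib

def extend(pcr, measurement):
    """TPM 1.2 PCR extend: new_pcr = SHA-1(old_pcr || SHA-1(component))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20                       # PCRs start zeroed at boot
for component in [b"bios", b"bootloader", b"hypervisor", b"vm-image"]:
    pcr = extend(pcr, component)         # each stage measures the next

# The final PCR value depends on every component and on their order,
# so any tampered or reordered stage yields a different digest.
tampered = extend(extend(b"\x00" * 20, b"bios"), b"evil-bootloader")
assert pcr != tampered
```

This is why the chain is verifiable: a PCR can only be extended, never set directly, so a component that has already launched cannot rewrite the record of what launched before it.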
Why do you as a guest need your measurements? What purpose would that serve you? And what guarantee would the guest have? It's somebody else who wants to know whether the guest that's launched is a trusted guest or not; it's not the guest knowing whether it's trusted. Can we... do you mind if we take it offline? Alright, yeah. Can you say that again one more time: what is untrusted, you're saying, in the model? At the end of the day, all this chain of trust we are building is to ensure two things. One, the VMs that are going to be launched are running on a trusted platform. And when we say a platform, it is the hardware, the firmware, the BIOS, the hypervisor, whatever is controlling it. That's number one. As an extension to that, we are saying, can we also measure and attest the VMs themselves so that I know my VMs are trusted also. There are a lot of other parts that need to be trusted for this picture to work. Like, how does the service provider guarantee that this is there? And how do you protect your key management system? Security is not one thing; you need a bunch of things to work and make sure that they are trusted equally. Our goal is to make sure that what we call the trusted compute base is as small as possible. So we are constantly trying to see which parts of this we can take away from the trust boundary, so that the things we measure are as small as possible. But the state of the technology is that we still have to trust a lot of these things. Can we just hold that question for a little later? I'm getting the signal that I'm running out of time here. I'm going to do my best on the McAfee stuff; this was supposed to be JK's thing. One of the things McAfee is trying to address is: you may have security profiles, security policies in your data center.
And as you are going in and out of public clouds, how do you ensure that the same security profiles and postures are applied to your workloads as they burst in and out of these public clouds? So, assuming you have McAfee ePolicy Orchestrator as your primary mechanism to manage within your data center, they have created some plugins to ePO that can now discover which of your workloads are actually going into public clouds and what their security profiles and policies were internally, and ensure that the same policies are applied when they are external as well. So that's why they have this concept of: discover what your current VMs and workloads are and what the policies are, and then provide some ability to monitor and protect them. From what I understand, these are available. The connectors to all of the following public cloud environments are available. There are plugins to the policy orchestrator, and then you can manage your policies when you burst into one of these public clouds just like you manage them in your enterprise. These are available on McAfee's website, and you should be able to at least get a peek at them there. With that, I want to wrap up a little bit and see if there's time for more questions. Our goal is to make sure that security is integrated efficiently and across the board, so that there is this foundation of trust that we can enable across the domains in the infrastructure. When we talk about trust today, we are talking about launch-time trust; we hope to get to runtime integrity eventually. As an enterprise, as somebody who's hosting with a service provider, our request to you is: make sure you demand that visibility and control from your service providers. Ask them, because you need to be able to verify the integrity of the infrastructure on which your workloads are running. And if you are a service provider, be transparent.
Make sure that you provide the trust and visibility that they need so that the critical audit and compliance requirements that enterprises need can actually be met. I think I am pretty much to the end of my content here. So we got 10 minutes, I think, or maybe 12 minutes for questions. Anything, anybody? I know you had, you were going to ask me a question and then you... Okay. All right, any other questions? All right, then thanks a lot for coming.