Who cares about security? Awesome. Right. Hey, good afternoon. My name is Raghu Yeluri, and Abhishek couldn't be here for a personal reason, so I'm going to try to be Abhishek as well today. I work in our Data Center and Security Products group. I'm a principal engineer and drive a lot of the infrastructure security solutions and architecture development. All right, Intel IT. Yes. But right now, I'm going to take a different role. We have been working in the OpenStack security space for some time. Awesome. Murphy is in good shape today. I'll give it a few seconds to see if it shows up again. Nobody's trying to hack me here, are they? OK. All right. So let's see. Trusted compute pools, trusted boot, trusted VMs. We've been trying to do that for three, four years in OpenStack. The journey is not complete. We have made great strides with trusted boot and trusted compute, and we are trying to get that trust up into the VMs, into the workloads. We actually announced a product called Intel Cloud Integrity Technology, CIT, which essentially is a way to extend the hardware trust into the VMs, into the workloads. Today I'm going to slightly change course and talk about what we are doing with containers, where Intel's focus is with containers, and what we mean by trusted Docker containers. I have somebody from Docker here, right in front. His name is Bart. Anything I say incorrectly about Docker, he's going to correct me. Before I jump into what trusted Docker containers are: in the role that I'm in, I talk to a lot of customers, and we hear very little about Docker security, actually. I'm just kidding. We hear a lot about Docker security. I'm going to show you the top four or five things that they ask us, Intel, the hardware company. I'm going to tell you what trusted Docker containers are, what we are doing, and how we are trying to use a hardware root of trust to protect them. And then I'm going to transition into something interesting.
Who verifies the trust? OK, the platform has booted up, Docker is there, but who verifies the trust? I'm going to show you a reference architecture we built, where OpenStack becomes the control plane for both launching VMs and trusted Docker containers. And then hopefully, if the demo gods are with me, the way my machine is behaving today, I don't know how well I'm going to do demos, but I'll try. I'm going to show you a live demo of it. But first, I think I should do this slide: Docker in one slide. This is my perspective, by the way. To me, it's a new way of building apps, where you separate the apps from the underlying infrastructure. You think in terms of microservices, and then you can scale by composition. I remember the days of service-oriented architecture, servlets. I think finally it's beginning to come into place now. A lot of customers who deal with containers, who deal with Docker, are in the mode of not building big apps, but small services, and then they aggregate, compose, and scale by composition. The underlying building blocks, I don't think I need to mention to anybody here: the Linux kernel's namespaces, cgroups, all the stuff that you all know. Same thing about the Docker components: the Docker Engine, repositories, either private or public, and the Docker images, the various layers that hold the apps. The interesting thing is orchestration. A lot of focus has been on Docker, the platform, running tens and hundreds of containers. But the moment you get into enterprise scale, you want to be able to take lots of hosts together and treat them as one singular host, or virtual host, if you will. So that's where orchestration comes in. And if you look at all the work that's going on, it's really spectacular. OpenStack, potentially, is one way. Docker Swarm, Kubernetes, of course, at the head of the pack, Mesos, Fleet, Lattice, you name it, lots and lots of work on orchestration.
The reason that's very important for me is that it's in that orchestration layer that I'm going to plug in trust. Because the moment a container goes onto a server, it's already too late if there's a security problem there. So I want the orchestrator to know about the security of that server before it puts something on there. No surprises here, I'm sure, for many. These are the things we heard over and over again as what Intel needs to help on with Docker security. How do you know that the Docker host actually has integrity? You're assuming that, yeah, the platform is there, the Docker Engine is there, and you're going to start launching Docker containers. But how do you know that the platform is good? How does a service provider prove to you that the platform is good? The next one is container integrity. OK, I trust the platform is good, but how do I know that the Docker images, the containers that are getting launched, are good? What's the source of the image? Who wrote it? How do you know that the right image is getting launched? Can somebody prove to you that the right image is getting launched? Or is one of the layers not what you expected it to be? It still works, but the layer is tampered with; something else is there in the layer. Of course, runtime protection. Containers are so popular, and they share the kernel. If there is a leak from a container down into the host, in most cases, it's game over. How do you do runtime protection? Intel, given the position where we sit in the stack, may have some advantage in looking at the problem from a different angle, so we can do some things there. The next one is interesting. This is what Gartner keeps talking about all the time: enterprise readiness of Docker, compliance, manageability, identity, authentication. It's not about running an Apache server in a Docker container. That's well done, well understood.
But as you start building mission-critical apps, telco guys building NFV apps in containers, how do you do compliance? How do you do manageability? Containers are talking to each other in this world of scale by composition. How do you know that the right container is talking to the right container? How do you do authentication at that speed? Major problems. And then the last one is interesting. We heard many times that people want a single control plane for VMs as well as containers. The de facto one people talk about is Docker, of course. But I'm assuming the problem is going to be there for everything. Intel's focus: we want to make sure there is integrity assurance on the hardware and the OS platforms on which containers are running. That's it. That's the singular focus we have. And what do we mean by trusted Docker containers? This is what we want to do. I'm not going to say we have all of it today. We want to make a specific claim that the integrity of the Docker host at launch is there, that there is runtime integrity of the Docker host, and then the more interesting thing would be, and I bet for the Docker guys it's the same thing, the integrity of the Docker images and containers. That is a huge space, a lot of opportunity, a lot of options. Today I'm going to focus on one, and I may talk a little bit about number two if there's time. But today's focus is: what are we doing to ensure the Docker host has integrity? How many of you know Intel TXT? Awesome. Not surprisingly, we are trying to follow the same model that we did for VMs on the Docker side as well, with a couple of differences. Number one, we are going to make the Docker daemon part of the trusted compute base. In addition to asserting that the BIOS, the firmware, and the OS are trusted when the platform comes up, we will assert that the Docker daemon is not tampered with as well. So at launch time, you're for sure launching with a clean platform.
Then the runtime protections have to come in, but at least you know that you started off with a clean platform. So the Docker daemon is solidly within the trusted compute base of what we're going to do. The chain of trust, like I said, goes from the hardware to the firmware all the way into the Docker Engine. A compute host can't say, "I'm trusted." Somebody else has to tell you that server is trusted. That's where the remote attestation process comes in. We have something called an attestation authority that will give you an assertion that a server or a set of servers is trusted for Docker. The next logical thing would be to go one level higher, which is: how do you ensure that the Docker images are not tampered with? Here there are multiple options. I'm not going to get into details today, but we can follow the same model that we do for the base platform, which is measuring everything. There could be other options, like signatures. But signatures have an interesting side effect. How do you verify the entity that signed that image? What's the root of it all? How does the machine know what the root certificate is? Where do you put the root certificate? What happens if the certificate needs to be revoked? So there are some certificate manageability issues there. Measurements have their own problems. One of the most compelling features of containers is the speed at which they launch. If you have to measure all images, all layers, you're going to take a performance hit. So there are some design considerations there. This topic is definitely for another day, but I just wanted to put it out there. Like I said, for small, simple containers, this is good. But how about PCI DSS-related containers, or HIPAA-related containers? Or, if somebody is from the federal government, how about federal apps? Strong compliance requirements, strong boundary requirements.
So the things that we have done for VMs, what we call boundary control, where you can control where your workloads can or cannot run, either by geolocation or by quality of service and SLAs, we're going to bring the same methodology to containers as well. Again, it's not for every container, I want to be very clear. There will be some containers that need this, and for those, we're going to provide the same boundary control that we did for VMs. And the last one is interesting. This is one we're still exploring, but will there be containers that hold personally identifiable information, secrets, keys, such that if the images are sitting in repositories, you are exposing yourself to regulatory problems, or other problems? In those cases, maybe you want to encrypt the container images, let them sit in repositories encrypted, and then have a mechanism to release the keys knowing that the container is coming up on a trusted piece of hardware. The reason I say we are still exploring this is that technically we are there, but speed is a problem. The fact that you need to do all this to launch a container means that instead of one millisecond, it may take 5x more, 10x more, depending on what kind of network you are on. So technically we know what we need to do, but there are other challenges on that one. I keep saying we're going to provide this root of trust and chain of trust, so this is what we really mean. At the bottom is the Intel TXT platform. Those are instructions in our hardware that provide you a mechanism to measure and launch everything. The first thing we do is verify our own firmware. We provide some firmware to the platform, and we will make sure that that firmware is not tampered with. There are certain keys that come with the firmware. Their hashes are fused onto our chipsets, so we ensure that the Intel firmware is safe to begin with. That is the root of trust. If we know that it is tampered with, then the machine doesn't even boot up.
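The encrypted-image idea described above can be sketched as a key broker that releases an image decryption key only to a host the attestation authority currently reports as trusted. This is a minimal illustration under assumed names (`KeyBroker`, `NotTrustedError`, a plain host-to-status map); real systems use signed attestation reports and hardware-backed key stores, not an in-memory dictionary.

```python
# Sketch of trust-gated key release for encrypted container images.
# All names here are illustrative, not an actual Intel CIT API.

class NotTrustedError(Exception):
    pass

class KeyBroker:
    def __init__(self, attestation_reports):
        # attestation_reports: host name -> True/False (trusted or not)
        self._reports = attestation_reports
        self._keys = {}

    def register_key(self, image_id, key):
        self._keys[image_id] = key

    def release_key(self, image_id, host):
        # Release the image decryption key only to a host that the
        # attestation authority currently reports as trusted.
        if not self._reports.get(host, False):
            raise NotTrustedError("host %s failed attestation" % host)
        return self._keys[image_id]

broker = KeyBroker({"docker-host-1": True, "docker-host-2": False})
broker.register_key("websphere-liberty", b"\x00" * 32)
key = broker.release_key("websphere-liberty", "docker-host-1")  # trusted host: key released
```

A request from `docker-host-2` would raise `NotTrustedError`, so an encrypted image pulled onto an untrusted host stays undecryptable.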
The next thing is the BIOS. So we measure the BIOS. What do I mean by measurement? I don't know if you guys can see the text here. Measurement is essentially taking, at load time, the component's identity. In other words, it could be the hash of the component, of the image itself. So we take the BIOS image and take a hash of it. View details, how about that? The next thing is the bootloader, the pre-kernel, which is tboot, which actually uses Intel TXT instructions to do the measurement of the rest of the system. We measure the OS kernel. Then we measure the initrd. And we actually extended the initrd with something called tboot-xm, extended measurements, so that we can measure the Docker daemon. There will be a mechanism that we provide so that, for different kernel versions, you have a build tool for the initrd so that the so-called measurement agent that we bring gets packaged into it. Then that will measure the Docker daemon. And once the machine is launched, you can see that in the Docker daemon we are adding some capabilities that will eventually do the rest of the Docker container stuff. So at this point, given the context of this session, think about it as: from the hardware up to the Docker daemon, we are measuring everything. Where do these measurements go? I apologize for the text; I had to put everything in one slide. The measurement on a TXT platform happens in two phases. There is the measurement of the hardware and the BIOS first, and these measurements all get written into registers in the TPM. PCR stands for Platform Configuration Register. There is a spec for it from the TCG. You know what PCR0 means; you know what PCR1 is, because all the different parts of the boot sequence go into those registers. The second phase of the measurement is where the OS and the Docker daemon, all those components, get measured. The way we built this thing, we are not tied down to just measuring the Docker daemon.
In this boot process, if you want to measure other things, Java runtimes, Python configuration files, you can measure anything you want, as long as you provide a manifest to tboot-xm, and at boot time it's going to measure them and extend them into the TPM registers. This way, we are not tied to what we think are the right things to measure; we give that control to any DevOps person who wants to configure it differently. At the end of it, everything is measured, everything is extended into the TPM, and the system launches. But there is no guarantee that all the pieces are what you expected them to be. All TXT did at this point was measure everything, and it guaranteed that while the measurement process is going on, there is no other thread running on the system. It's only one CPU thread that runs, and the measurements happen in the CPU cache. So we can guarantee with 100% confidence that nothing else is running on the system, and the measurement process is fully integrity protected. But what if the Docker daemon, for example, is not the compliant version that you have? It doesn't have to be tampered with by malware. What if it is not the version that you deployed in your environment, and somebody accidentally brought in a wrong version of the Docker daemon? That's where the verification comes in. And who does that verification? Typically, it would be a scheduler, a cluster manager, a policy manager, an orchestrator, whatever you want to call it. The model is very straightforward. There will be a filter, what we call a trust filter, that plugs into the schedulers, the cluster managers, the policy managers. And in the environment, there would be something called an attestation authority. It could be Cloud Integrity Technology; it could be anything else. But we provide one. It's going to be available for VMs, and it will be available for containers also.
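The measure-and-extend step described above can be sketched in a few lines. A PCR cannot be written directly; it can only be extended: the new value is the hash of the old value concatenated with the new measurement, so the final PCR value encodes the entire ordered chain. This is a simplified software model, not TPM driver code; SHA-1 is used only because TPM 1.2 PCRs are SHA-1-sized.

```python
import hashlib

def measure(component_bytes):
    # A "measurement" is just the hash of the component image.
    return hashlib.sha1(component_bytes).digest()

class SoftwarePCR:
    # Simplified model of a TPM 1.2 Platform Configuration Register.
    def __init__(self):
        self.value = b"\x00" * 20  # PCRs reset to all zeros

    def extend(self, measurement):
        # PCR_new = SHA1(PCR_old || measurement): order matters, and
        # no later extend can undo an earlier one.
        self.value = hashlib.sha1(self.value + measurement).digest()

pcr19 = SoftwarePCR()
pcr19.extend(measure(b"initrd-image"))
pcr19.extend(measure(b"docker-daemon-binary"))
# Any change to either component, or to their order, changes pcr19.value.
```

This is why a whitelist comparison works: the expected PCR value is reproducible only if every measured component, in order, matched what was expected.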
And the attestation authority has a singular purpose, which is to provide you that assertion of trust: that the server has booted correctly, that it booted the things you wanted it to boot, and that the Docker daemon and the Docker platform are the right ones. If anything changes, the attestation authority is going to tell you: hey, either the BIOS is out of compliance, the firmware is wrong, the OS is wrong, or the Docker daemon is wrong. It could be anything. The principal operation is pretty much laid out on the screen. The cluster manager initially determines a subset of hosts based on whatever rules it follows, maybe utilization of CPU, memory, location. For that subset, it's going to ask the attestation authority for the trust information about the servers. You get a bunch of signed reports; that's what the trust filter processes. And then, based on that, the best server for the container gets picked, and the container gets launched. The cluster manager could be Kubernetes, it could be Docker Swarm, and I'm going to demonstrate to you that we have an OpenStack scheduler which does the same thing. So this is the reference architecture we built, where we have pools of servers with KVM and Docker. We have extensions to the Nova scheduler. In fact, these are the same extensions, the trust filters, that have already been in mainline since the Folsom release. We had to make some changes, more configuration things than changes, I would say, and I'm going to walk through those a little bit here. The first change is that for every image, we had to add a hypervisor type. If it is Docker, the type is docker; if it's KVM, the type is qemu. And we had to enable the image properties filter in the Nova scheduler. So the moment you set the policy, what I call the image type, on the VM image, when the Nova scheduler wants to run this image, the first thing it does is run this image properties filter.
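The signed reports mentioned above are what lets the trust filter believe the attestation authority rather than the host itself. A sketch of that verification step, with HMAC standing in for the real signature scheme (CIT's reports are SAML assertions signed with the authority's X.509 key, and the key and field names here are illustrative):

```python
import hashlib
import hmac
import json

# Illustrative shared key; a real deployment verifies an X.509 signature
# on a SAML assertion rather than an HMAC over JSON.
AUTHORITY_KEY = b"attestation-authority-key"

def sign_report(report, key):
    # Canonicalize the report so the signature is reproducible.
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_report(report, signature, key):
    # Constant-time comparison; any field change invalidates the report.
    return hmac.compare_digest(sign_report(report, key), signature)

report = {"host": "docker-host-1", "trusted": True, "vmm": "docker"}
sig = sign_report(report, AUTHORITY_KEY)
assert verify_report(report, sig, AUTHORITY_KEY)
```

A host cannot forge "trusted: True" for itself: changing any field of the report breaks verification against the authority's signature.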
So based on the image type in the image, it's going to find the subset of servers that match that hypervisor type. So if the image type is docker, for me the top two servers are going to be picked by the image properties filter; the bottom two servers are automatically ignored by that filter. And then, as you can see in step three, the Nova scheduler runs the trust filter. Ignore the location one for now, but it runs the trust filter. The trust filter is going to take that subset, those two servers, and use the attestation authority to get the trust information. In the picture you see on the right, on the Docker Engine you see an X mark, meaning that something is out of compliance with the Docker Engine. So the trust filter will come back and say, hey, I have only one server that is trusted for Docker. And then the scheduler automatically schedules the Docker image on that server. And the trust authority, the attestation authority, gave you a signed SAML report about the trustability of the platform. It's going to tell you: the Intel hardware is good, the firmware is good, the BIOS is good, the OS is good, and the Docker daemon is good, and they all booted up correctly. If the image type is KVM or QEMU, exactly the same process happens. No manual steps, nothing here; it's all automated. I'm going to come back to this, but let's see how good my luck is with the demo today. I hope the backup screen doesn't come up again. All right. Yes, it works. I'm going to start with the attestation authority first. I keep calling it the attestation authority because Cloud Integrity Technology is too long a name for me to keep saying. This is the attestation authority's view of the compute pool. This is out of band; if you remember the picture, there's an out-of-band way for us to get to those servers. So I have two servers that are Docker servers and two KVM servers. I have one where the BIOS is trusted and one where it's not. It's hard when you do these last-minute demos.
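The two-stage filtering just walked through can be sketched as two plain functions: first narrow hosts by hypervisor type, then keep only hosts the attestation authority reports as trusted. This is a simplified stand-in for Nova's image properties filter and trust filter, not the actual Nova code; host and property names mirror the demo.

```python
# Stage 1: image properties filter narrows hosts by hypervisor type.
def image_properties_filter(hosts, image_props):
    wanted = image_props.get("hypervisor_type")
    return [h for h in hosts if h["hypervisor_type"] == wanted]

# Stage 2: trust filter keeps only attested-trusted hosts, and only
# when the image actually asks for a trusted platform.
def trust_filter(hosts, image_props, attestation):
    if image_props.get("trust") != "true":
        return hosts
    return [h for h in hosts if attestation.get(h["name"]) == "trusted"]

hosts = [
    {"name": "docker-host-1", "hypervisor_type": "docker"},
    {"name": "docker-host-2", "hypervisor_type": "docker"},
    {"name": "kvm-host-1", "hypervisor_type": "qemu"},
    {"name": "kvm-host-2", "hypervisor_type": "qemu"},
]
attestation = {"docker-host-1": "trusted", "docker-host-2": "untrusted",
               "kvm-host-1": "trusted", "kvm-host-2": "trusted"}

image = {"hypervisor_type": "docker", "trust": "true"}
candidates = trust_filter(image_properties_filter(hosts, image), image, attestation)
# candidates -> only docker-host-1, matching the demo's outcome
```

As in the demo: the two Docker hosts survive stage 1, and only the one with a clean Docker daemon measurement survives stage 2.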
We didn't have time to change the VMM column to say Docker based on which server you're on, so that's why the column shows VMM. For a Docker server, read that as Docker. So it shows that something is wrong with the Docker image. Let me get to the details of it. Remember I talked to you about the PCR values, what's in the TPM, what got measured. These are all the things that got measured. PCR0 is the BIOS. You have a whitelist, which is what you expected the value to be, and you have the current value, which is the PCR value. This is what TXT measured: when that server was booting up, that's the BIOS it measured for that machine. PCR17 is our firmware. PCR18 is the OS and the OS kernel. And PCR19 is where you see the initrd and the Docker daemon. You can see the whitelist has a different value than what we measured for the Docker daemon. Again, like I said, I'm trying to emphasize this: it doesn't have to be a tampered Docker daemon. It could just be a different version. The whitelist says this version is the compliant, accepted version in my data center, and somebody accidentally downloaded the wrong version of the Docker daemon onto that machine. How does that happen? I'm not a system admin, but I've talked to enough of them. Things happen in data centers; that's what they say. When you look at a KVM server that is trusted, you don't see that. You see all the values matching what's in the whitelist, so we know that the server is trusted. Now I'll switch over to the OpenStack side, in the interest of time. When I go to the admin view, I see those four servers, the two KVM and the two Docker servers. I see the green to show which are trusted; one is not trusted. I'm working on a smaller-resolution screen here, so I'm having a little hard time with the mouse, apologies. So when I come to images, remember I showed you I had to turn on a policy, a hypervisor type, for Docker.
So here you see a WebSphere Liberty image in raw format. If I click on the image and go all the way to custom properties, I see the hypervisor type as docker. And I said, hey, I want trust to be true, meaning that this is a critical Docker image for me; I want it to run on a trusted platform. That's all at the image level. That's all you have to do: set the hypervisor type, set the trust policy to true. And that will automatically trigger the scheduler to pick the right server and launch it. If I show you a KVM image, you will see something very similar. The format is there, trust is there, and you see a bunch of other attributes that we are doing for VMs, which is the ability to encrypt the VM and release the keys later, some other VM-level stuff which we don't have in Docker right now. So now if I launch this Docker image, it's pretty much going to go through and launch on Docker server one, which is the trusted server. If it didn't find any trusted servers, it wouldn't launch. And then the complementary side of this would be: what if the image itself is tampered with? Like I said at the beginning, there are multiple ways we can determine that. We can measure everything; we can sign it. We are exploring all the options, and we'll see which one makes the most sense. So that's the demo part. Let me get back to summarizing the changes that we had to do. The first one was the hypervisor type; I talked about that. You have to activate the image properties filter and activate the trust filter, and the trust filter has actually been in OpenStack since the Folsom release. And then you have to configure Nova compute on the host side to use the Docker driver. I put the thing you need to put in the configuration file, the compute driver set to the nova-docker driver, but there are a lot of detailed instructions at the OpenStack Docker link.
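The image-level policy shown in the demo boils down to two custom properties on the Glance image. A small sketch of those properties and a scheduler-side check that reads them; the property names follow the talk (`hypervisor_type`, `trust`), while the helper function is illustrative rather than actual OpenStack code:

```python
# The two custom properties set on the WebSphere Liberty image in the
# demo, and an illustrative scheduler-side check that reads them.

def requires_trusted_platform(image_properties):
    # An image asks for a trusted host by carrying trust=true.
    return image_properties.get("trust", "false").lower() == "true"

websphere_liberty = {
    "hypervisor_type": "docker",  # routed to Docker hosts by the
                                  # image properties filter
    "trust": "true",              # routed through the trust filter
}

plain_image = {"hypervisor_type": "qemu"}  # no trust policy set

assert requires_trusted_platform(websphere_liberty)
assert not requires_trusted_platform(plain_image)
```

Setting the policy on the image, rather than on the host, is what keeps the flow automatic: any host can serve the image, but only a trusted one will be chosen.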
On the infrastructure side, I touched upon it, but I'll get into a little more detail here. You need to have TXT and TPM hardware. Most Intel servers that you get from your OEMs today will have TXT-capable CPUs. But the thing you may not have is the TPM module. For a lot of reasons I can't get into right now, server vendors do not automatically ship servers with TPM modules. You have to ask for it, or if they have a portal, you just say, hey, I want a TPM, and they'll ship it as part of the board. Before the question comes up: this is TPM 1.2. We will go to TPM 2.0 as it becomes available; we have a lot of work going on with that as well. And then there is the attestation server. You get the attestation server from Intel at no cost, as a fully functioning virtual appliance. There are some ISVs and service providers who integrate it and say, hey, we don't need to deal with the code, we trust you guys; you give us a binary appliance with REST APIs, and we'll just call them. But if you want code, there is an open source version as well. It won't be as fully functional as the appliance, but there is an open source version for you to dig through the code, and if you want to make changes to it and integrate that, you can do that as well. I think I'm getting close to time here. At Intel, like I said up front, our focus is making sure that the platform on which you're running Docker containers has integrity assurance. At some point, that chain of trust will go higher, like we have done with VMs, but we are not there yet. Intel TXT and the attestation server, I think I already mentioned. There will be some changes, even if they're as simple as configuration changes, that we need to make in OpenStack. We will provide those automated scripts for you guys to go do it. Our goal is to upstream them, but I'm sure, I see some OpenStack developers in here.
You know how hard it is to upstream anything into Nova. So we are going through the same pain. It will take some time to upstream these things into Nova. Other projects are easy, but Nova will take some time. If you want to try it out with VMs today, let us know. It's there. We'll give you the product guides, installation guides, all that stuff. You can be up and running in a few hours. The reference architecture I showed you will work within a few hours; I can tell you that with a lot of certainty. And at the end of Q3, we will provide the Docker platform support as well. So you can do both Docker containers and trusted VMs from within the OpenStack control plane by then. I think that's all I wanted to share as far as content goes. Questions? We have a few minutes. Yes. [Audience question] You know, one of the worst things that I can do to a system admin is make a machine a brick. If you want that, we can do it. But we let the machine come up, and the remote attestation is what's going to tell you for sure whether it is a trusted machine or not. Some service providers use what is called a remediation network, where these machines actually come up first on that network, and once the attestation system says a machine is trusted, it automatically gets moved to production and starts taking load. Absolutely. Imagine that, yeah. Question? [Audience question] You know, I had one conversation with a Magnum person. The way we are doing this, I think it should work very well with Magnum, because I did a reasonably sufficient amount of work to see how it works with Kubernetes and with Docker Swarm as well. So I'm pretty sure it shouldn't be that difficult with Magnum. I'll measure the VM, and then in the VM, you can tell me what things in the VM should be in your TCB. And if one of them is the Docker daemon, we'll measure that as well. Yeah. On the first one, have you seen the announcement yesterday, I think, on Intel Clear Containers?
Intel Clear Containers is an open source project that is addressing the notion of isolation using Intel VT-x, the virtualization technology. So think about it as a lightweight VM in which the containers run. It is. But it depends on Intel Clear Linux. The reason I paused that long was that I want to be sure I tell you the right thing there. And your second question, about image and scaling: that's the reason why I haven't said much about Docker image integrity protection in this session. I left it at the Docker host integrity, because, exactly as you said, there are many versions, many layers; it has to scale significantly. Any other questions? Hello, Justin. Yeah, I had a question. OK, last question, please. Yeah, last question. I've got the cards. Yeah, no worries. So, I'm JJ from Kismatic. We're super excited about the Clear Linux operating system distro as well; we've got that in our GitHub repo. We're the Kubernetes company. I had a question on support for additional hypervisors. I know there's KVM support today on Clear Linux. Are you thinking of Xen or other hypervisors? You know what, I'd have to defer that to the Clear Linux guys; I'm not that familiar with it. But if there's somebody else from the Clear Linux team here from Intel, can you answer? You have plans to support Xen. Oh, you do have plans to support Xen. Awesome. Thanks. OK, perfect. Any other questions? If not, thank you very much. Oh, oh, hang on, hang on. The real stuff, OK? We have a Basis watch here that somebody is going to get. Who wants it? Right, let me see if this is signed and attested. 621-3261, all right. You don't work for Intel, right? Awesome. All right, thanks, everybody. Thanks a lot.