Okay, so you have been hearing a lot about containers over the past three days, and one of the things we are talking about today is security. A couple of months back there was a survey by devops.com asking people what the number one barrier is to adoption of containers in a production environment. Any guesses for the answer? Security.

So my name is Abhishek Gupta, I am a cloud security architect at Intel. My co-presenters Raghu Yeluri and Arhan, both principal engineers at Intel, could not make it for some reason, so I am going to be three persons in one today. The agenda for today: I will give an overview of containers and talk about container security — the top customer asks we have seen talking to various customers and architects across different companies — and Intel's focus in this space: what we are doing to enable trusted and secure containers. Who verifies trust? That is an important question. We will show a reference architecture with OpenStack, and I will show a live demo of the work we have done with OpenStack and containers. Then I will talk about hardware-assisted runtime integrity and runtime isolation. If you were in the last session, there was a mention of Clear Containers, and that is where I will cover hardware-assisted isolation. Finally we will summarize and end with a call to action.

So, starting with containers: I am sure many of you were in the previous session, where there was a good overview of the different technologies — namespaces, cgroups and so on — and there have been various talks over the previous three days, so I will keep this short and concise rather than bore you with the same thing again. Containers are essentially a very lightweight, low-overhead virtualization mechanism using operating-system-level virtualization.
The primary technologies used are namespaces and cgroups, and containers essentially look like VMs from the outside: you can SSH in, you can ping them. The critical difference is that all the containers share the host OS kernel. Unlike VMs, where the whole guest operating system runs inside each VM, with containers only the app and its associated libraries run inside the execution container.

Now, containers have existed for 10-plus years, as was also mentioned in some of the previous talks. What has really driven the momentum for containers is Docker and similar container management systems. The innovation Docker provides is a way to package and deploy application containers: you can take an application container from one place, move it around, and run it in another place. That is the concept of Docker images. Another innovation with Docker images, which makes Docker very unique, is the use of the union file system concept in Linux, so that an image consists of different layers and a new image only adds a diff. So if you have a base image, as shown here, and you add, say, Apache on top of it, you are only installing certain packages, which form a diff, or layer. The base layer can then be shared across multiple containers, which lets containers launch very quickly, within milliseconds.

Various orchestration mechanisms also exist for containers: OpenStack, Docker Swarm, Kubernetes, Mesos, CoreOS Tectonic, Fleet. So how does security come into the picture? The major differentiation between VMs and containers, as I mentioned, is that VMs have their own guest operating system whereas containers share the OS kernel.
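To make the layering idea concrete, here is a small Python sketch of content-addressed layers being shared between two images. This is illustrative only — Docker's actual on-disk format and digest scheme differ — but it shows why a shared base layer is stored once and each image only adds its diff:

```python
import hashlib

def layer_id(content: bytes) -> str:
    """Content-address a layer by its SHA-256 digest (illustrative;
    not Docker's exact addressing scheme)."""
    return hashlib.sha256(content).hexdigest()

class ImageStore:
    """Toy store: layers are deduplicated by digest, so a base layer
    shared by many images is kept only once."""
    def __init__(self):
        self.layers = {}                         # digest -> content

    def add_image(self, layer_contents):
        digests = []
        for content in layer_contents:
            d = layer_id(content)
            self.layers.setdefault(d, content)   # dedup: stored once
            digests.append(d)
        return digests                           # ordered layer list

store = ImageStore()
base = b"ubuntu base filesystem"
web  = store.add_image([base, b"apache packages"])  # base + Apache diff
db   = store.add_image([base, b"mysql packages"])   # base + MySQL diff

# Both images reference the same base layer; only the diffs differ.
assert web[0] == db[0]
print(len(store.layers))   # 3 unique layers back two 2-layer images
```

Because launching a container from an already-present base layer needs no copying, startup stays in the millisecond range.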
Now, one possible attack is that you have an adversary in the host operating system. The host operating system has the Linux kernel, but there can also be other applications running alongside the containers. If the host operating system is compromised or tampered with, an adversary can break into a container and try to leak data, or even tamper with the container — all kinds of attacks are possible. So that is one attack, where the adversary is in the host.

Another, which is the more common and more important threat given the difference between containers and VMs, is an adversary inside a container that can potentially break out of the container and affect the host operating system, the host kernel, or some other container. Now, these attacks are possible with VMs as well, but they are more difficult there, because with VMs you have to break through more layers of security: from the application you first have to break into the guest operating system, and from the guest operating system into the hypervisor. With containers you need just one security breach — you only have to break out of the container.

Summarizing the threats I described — and we also ran these by various customers and talked to the Docker security team — the first customer ask is Docker host integrity: assurance that the host operating system and the Docker daemon, the whole platform, have not been tampered with. The second is that once the integrity of the host platform is assured, you need to assure the integrity of the containers themselves — that the containerized application running on the platform has not been tampered with.
The other customer asks are the runtime integrity of the containers and the host operating system, and runtime isolation. If you are running containers in a multi-tenant environment, how do you make sure these containers are isolated, and how do you prevent the kind of attack I just described on the previous slide? Finally, another ask is: if you have a cloud environment where you want to run both VMs and containers, how do you run them in the same control plane — how do you have an orchestration mechanism such as OpenStack that can manage both containers and VMs?

Today's focus is going to be hardware-based integrity assurance for containers, covering both the platform and the app running inside the container. So the four focus areas I just described are: launch integrity of the host, or platform; integrity of the containers running on top of that; and then, moving on, runtime integrity of the host and isolation of the containers. Any burning questions at this point with respect to the attacks? Okay.

So, trusted containers. What we have done is take the same model we applied to trusted VMs and apply a similar model to containers. The idea is that the first step is to assure the integrity of the host platform, which in this case consists of the hardware, the host operating system, and the Docker daemon. This is achieved through a technology called Intel Cloud Integrity Technology, which builds on hardware consisting of Intel TXT — Trusted Execution Technology — and the TPM. The way Intel TXT works is that you have a hardware root of trust.
You start from the hardware root of trust and establish a chain of trust up the stack: from the hardware root to the BIOS to the OS kernel, and we have extended that further from the OS kernel to operating system services and packages, such as the Docker daemon in this case. What happens is that during system boot, starting from the hardware root, each component calculates the cryptographic hash of the next component in the stack — this process is called measurement. So, for example, the hardware root takes the hash of the BIOS, and these hashes are stored in the TPM, the Trusted Platform Module, which is like a secure chip — specifically in the TPM's PCRs, which are registers in the TPM. This all happens in hardware: as the system boots up, you take the measurements of all these components and store them in the TPM PCR registers.

Once you have these hashes stored in the TPM registers, and you know that at a particular time the system is in a good known state, you can take these measurements and treat them as your golden values, or whitelist — golden values meaning you know the system is not tampered with at that point in time. Every time the system boots after that, these measurements happen again and the values are stored in the TPM again, so the values at the next boot can be compared against the good known values. The good known values can be stored out of band, not on the same server, but on a remote attestation authority, which is the Cloud Integrity Technology server. This remote attestation authority can, in an out-of-band fashion, attest that all these different servers are trusted. So in this case the trusted computing base, or trust boundary, comprises everything inside the red dotted line shown here and includes the Docker daemon.
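The measurement chain can be sketched in a few lines of Python. A TPM PCR cannot be set directly — it can only be extended, where the new value is the hash of the old value concatenated with the new measurement — so the final PCR value commits to the whole ordered chain. This is a simulation of that extend semantics, not real TPM access, and the component names are made up:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: PCR_new = SHA-256(PCR_old || measurement).
    Extend-only semantics mean the final value depends on every
    measurement and on their order."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(components):
    """Simulate a measured boot: hash each component and fold it
    into a single PCR, BIOS -> kernel -> daemon."""
    pcr = b"\x00" * 32                       # PCRs start zeroed
    for blob in components:
        pcr = extend(pcr, hashlib.sha256(blob).digest())
    return pcr

good_stack = [b"bios-1.2", b"kernel-4.1", b"docker-daemon-1.9"]
golden = measure_boot(good_stack)            # whitelisted while known-good

# A later boot with a tampered daemon yields a different PCR value,
# which the remote attestation authority flags as untrusted.
tampered = measure_boot([b"bios-1.2", b"kernel-4.1", b"evil-daemon"])
assert measure_boot(good_stack) == golden
assert tampered != golden
```

The remote attestation authority does exactly this comparison: it fetches the current PCR values and checks them against the stored golden values.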
The Docker daemon itself is also measured, and if it is tampered with, or a wrong version of the daemon is running, we can detect that using Cloud Integrity Technology. The remote attestation server can see that the Docker daemon has been tampered with and flag the server as untrusted, and when a server is untrusted you can initiate various actions, such as migrating the container or VM from the untrusted server to a trusted server in the data center.

Once you have the integrity of the platform, which includes the Docker daemon, the next step is to assure the integrity of the containers running on top of it. Here we evaluated two models. The first model is to measure and verify the Docker images, extending the same concepts from Intel TXT and Cloud Integrity Technology to the containers. The way we do it is by inserting plugins into the Docker daemon: the daemon is modified so that every time a container launch happens, it intercepts that request, performs the measurement — the hashing of the container image, the Docker image in this case — and then compares those hashes with the good known values. The good known values are created offline and ship with the Docker image. At launch time, you perform this measurement and compare against the good known values, and if the image has been tampered with, you can stop the container from being launched.

The second model, which is also being developed, is a signature-based model. If you have heard about Docker Notary, or Docker Content Trust, the idea there is that every time someone creates a Docker image, they can associate a signature with it, and that signature is verified when the image is fetched from the remote Docker Hub. We want to tie that signature mechanism to a hardware root so that it is more secure.
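The measure-and-verify-at-launch flow in the first model can be sketched like this. The whitelist contents, image names, and function are hypothetical — in the actual work the check is a plugin inside the modified Docker daemon — but the logic is the same: hash the image, compare against the golden value, and refuse the launch on a mismatch:

```python
import hashlib

# Hypothetical whitelist of good-known image digests, created offline
# and shipped alongside the image.
WHITELIST = {
    "web:1.0": hashlib.sha256(b"web image bits").hexdigest(),
}

def launch(name: str, image_bytes: bytes) -> bool:
    """Measure the image at launch time and compare against the
    golden value; block the launch on any mismatch."""
    measured = hashlib.sha256(image_bytes).hexdigest()
    if WHITELIST.get(name) != measured:
        return False       # tampered or unknown image: stop the launch
    return True            # measurement matches: allow the launch

assert launch("web:1.0", b"web image bits") is True           # intact
assert launch("web:1.0", b"web image bits, tampered") is False  # blocked
```

This is exactly what the demo later shows: the tampered image finds a host, but the launch-time measurement fails and the container never starts.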
So we are tying it up with Intel Trusted Execution Technology and storing the root certificate in the TPM. Similarly, the boundary control and geotagging policies apply equally to Docker containers. What we have done for VMs and Docker containers is make sure that your container or VM always runs in your specified geographical location: you can say, I want to run my container only inside the US, or Asia, or various other regions.

So far I have described running Docker containers on bare metal, a non-virtualized environment, but there is another model where you run the container inside a VM. How do attestation and integrity work in that model? Essentially, we use the same concepts. In this case, when you launch the VM — and the VM could include the Docker daemon — we measure the whole VM, or we can specify various files by their paths inside the VM, such as the path of the Docker daemon binary. So we can say /usr/bin/docker, and that itself is hashed and compared against the golden value as the VM is being launched.

Some more detail on what is measured for containers, on the different components that are hashed: starting from Intel TXT, we measure the BIOS, the bootloader, the initrd, the Docker daemon, and the various layers of the container. Now, once we have this whole mechanism to take these measurements, calculate these hashes, and store them in TPM registers along with the remote attestation authority, how do we practically use it and tie it to a scheduling mechanism? How do we actually enforce that workloads always get launched on a trusted node? To answer that, we give the scheduler of the cluster manager the intelligence: you can have different servers in your data center or cloud, and the scheduler decides which one to pick based on these trust and security properties.
Various scheduler and orchestration mechanisms, such as OpenStack, Kubernetes, and Mesos, can all use the same model. The idea here is that the scheduler has a trust filter, or security filter, and when a workload launch request comes in, this filter contacts the attestation authority, which holds the trust attestation status of all the different servers in the data center. It returns that information, and based on it the scheduler decides where to run the workload. So we tie the intelligent scheduler to the attestation authority to get efficient, trust-based scheduling.

Now I am going to talk about the reference architecture implementation we did for trusted Docker containers and VMs with OpenStack. In this case, you can have VMs and Docker on the same cloud. The problem we are addressing is that in a data center or cloud you can have some servers capable of running VMs, such as KVM hosts, and some capable of running Docker, and of those, at any point in time some may be trusted and some untrusted. How do you make sure your workload always ends up on the right server?

Here we are showing four servers: the bottom two run KVM/QEMU and the top two run the Docker engine, and then you have the Nova scheduler. A user would typically go to the Horizon dashboard and specify his workload. The additional step we have taken is that during the initiation of this launch, we tag the image with certain properties. One property we add is the hypervisor type: we specify whether the image is a Docker image or a KVM image. The second change we have made is adding a trust property to the image, which says this image always needs to run on a trusted server.
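The scheduler-to-attestation-authority interaction can be sketched as a simple filter. The host names, the trust map, and the function shape are illustrative — not the real Nova filter API — but this is the essence of trust-based scheduling: ask the attestation authority for each host's status and keep only the trusted ones when the workload demands it:

```python
# Toy view of the attestation authority: per-host trust status,
# derived from TPM quotes (names and values are illustrative).
ATTESTATION_AUTHORITY = {
    "docker-1": "trusted",
    "docker-2": "untrusted",
    "kvm-1":    "trusted",
    "kvm-2":    "trusted",
}

def trust_filter(candidate_hosts, image_requires_trust):
    """Keep only hosts the attestation authority reports as trusted
    when the image carries the trust=true property; otherwise pass
    all candidates through unchanged."""
    if not image_requires_trust:
        return list(candidate_hosts)
    return [h for h in candidate_hosts
            if ATTESTATION_AUTHORITY.get(h) == "trusted"]

# A trust-tagged workload never lands on the untrusted Docker host.
assert trust_filter(["docker-1", "docker-2"], True) == ["docker-1"]
assert trust_filter(["docker-1", "docker-2"], False) == ["docker-1", "docker-2"]
```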
Once you have that information in the image, an image properties filter gets activated. The job of that filter is to look at the properties of the image on one hand and the properties of the host on the other, and compare the two. If there is a match between the properties of the host and the properties of the image, it can select that particular host. In this case, we use the hypervisor type property to make sure the image matches the host we are trying to run it on.

The second problem is selecting a trusted host. For that, we have the trust filter and the location filter. The location filter selects a host in the appropriate location: if you ask for a container to always run in, say, an America-based geographical location, it always runs there, and for Asia it always runs in an Asian region. The trust filter talks to the attestation authority, which gathers the information on all the different servers — it actually fetches the values from the TPM registers into its database — and passes that information to the scheduler, and the scheduler makes the decision based on it. In this case, it fetches the container workload from Glance and runs it.

Another change we made to enable this reference architecture, specifically on the compute node, is the Nova Docker driver. The job of the Nova Docker driver is to talk to the OpenStack Glance-based repository on one hand and the Docker-specific file system on the other: it takes the image in the Glance format and transforms it into the Docker file system's layering-based format. So, in summary, the changes in OpenStack for this reference architecture: one, we add the hypervisor type property to the image; second, we enable the trust filter and the image properties filter.
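The image-properties and location matching can be sketched the same way. The host map and property keys here mirror the talk but are illustrative, not Nova's exact metadata keys:

```python
# Toy host capability table (illustrative names and values).
HOSTS = {
    "kvm-1":    {"hypervisor_type": "kvm",    "geo": "US"},
    "kvm-2":    {"hypervisor_type": "kvm",    "geo": "Asia"},
    "docker-1": {"hypervisor_type": "docker", "geo": "US"},
    "docker-2": {"hypervisor_type": "docker", "geo": "Asia"},
}

def image_properties_filter(image_props):
    """Return hosts whose capabilities match every property the image
    declares: hypervisor type must match, geo only if the image pins
    a region."""
    matches = []
    for name, caps in sorted(HOSTS.items()):
        if caps["hypervisor_type"] != image_props["hypervisor_type"]:
            continue                       # wrong engine for this image
        if "geo" in image_props and caps["geo"] != image_props["geo"]:
            continue                       # violates the geotag policy
        matches.append(name)
    return matches

# A Docker image pinned to the US region matches exactly one host.
assert image_properties_filter(
    {"hypervisor_type": "docker", "geo": "US"}) == ["docker-1"]
```

Chaining this filter with the trust filter is what guarantees a Docker image tagged trusted lands only on a trusted Docker host in the right region.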
And finally, on the compute node, we use the Nova Docker driver to enable this reference architecture. Now, this was all about making sure a workload always runs on a trusted compute node. But once you have selected the trusted compute node, we also ensure that the workload itself — the Docker image — is not tampered with. For that, we had to change the Docker daemon: we modified it to intercept the container launch and make sure the measurement and verification happen at that point. As for infrastructure changes, you can just use standard Intel TXT and TPM hardware.

So I am going to show a demo of the same reference architecture I just talked about. Hopefully everything will work — you never know with demos. This is showing the trust attestation view of the various servers in the data center. In this case, we have four servers, two being KVM servers and two being Docker servers. These columns, with green and red, show the trust status of the different servers: the different components — BIOS trust, operating system trust, and overall trust. Here we see that one of the Docker servers is trusted and one of them is not. We can actually drill into the one that is not trusted and see what the problem is. Here we see the TPM PCR values, which are the hashes of the different components — PCR 0, for example, holds the hash of the BIOS. The rightmost column is the whitelist value and the middle column is the value at this particular point in time. We see the red value here: there is a mismatch in the hash of the Docker daemon in this particular case, which shows that either the Docker daemon has been tampered with or something is wrong with the system, and you cannot trust this system anymore.
Once you have this information — switching screens to the OpenStack Horizon dashboard — here I am showing the hypervisor screen. We see that two of the servers are green, which means they are trusted, and two are red. This Docker server two, which I just showed, appears as red here. So we have this information, and now moving to images: I am showing different images here — maybe it's too small, can you see? — and website-liberty is an example of a Docker image. What we did was run a docker save command and then use that to push the image into OpenStack Glance. In the same Glance, we also have some KVM images, such as CirrOS. Looking into the properties we changed: on website-liberty, we set the hypervisor type to Docker and also set the trust property to true for this particular image.

I am now going to launch this website-liberty image, and let's see where it runs. The instance has started running, and we can see that it selected Docker server one as the host — it always selects the trusted host in this case, eliminating the untrusted one. What is also happening behind the scenes is that it measures the Docker image itself while launching it, and it will prevent the launch of an image that has been tampered with. So we will try launching another image, a tampered image we have stored here. We launch this image, and it initially reports success because it is able to find a host for it, but when it actually goes to the host and performs the cryptographic hash measurement and verification, it does not launch. So that is all for the demo. Any questions? This was all about Intel Cloud Integrity Technology, trusted compute pools, and trusted containers and applications.
Moving on to some of the runtime integrity and runtime isolation work Intel is doing in this space — and you heard in the last session about Clear Containers, I will come to that. The hardware-assisted runtime integrity work we are doing is called Intel Kernel Guard Technology. The idea with Intel Kernel Guard Technology, IKGT, is that it is a policy specification and enforcement framework: you specify certain policies, and using hardware assistance it enforces those policies to protect certain kernel and platform assets. It can protect kernel assets such as the kernel page table, and certain platform registers. You can have a policy that says if anybody tries to tamper with a particular asset, a certain action should be taken: you can either log that event or prevent it from happening. This in a way extends launch-time integrity into runtime, because during runtime you are protecting the kernel.

It is based on a thin hypervisor layer, built on Intel VT (Virtualization Technology), called xmon. The idea with xmon is that it runs at a higher privilege level than the operating system: it deprivileges the OS, and since it runs at a higher privilege level, it can monitor events as they happen and even prevent them. It allows policy specification via configfs, and the policy describes which assets are to be monitored and which actions are to be taken when certain events happen. This is an open-source technology available at 01.org — if you want to play with it, the link is at the end of the slides. Some more detail on the IKGT framework: the policy data can be expressed in JSON and passed through configfs; an IKGT driver is running, and it eventually passes that policy to xmon, which stores it in its policy table.
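To illustrate the policy-as-data idea, here is a sketch of an IKGT-style policy and the lookup an enforcement layer would perform. The JSON field names and the asset names are hypothetical — the real configfs schema may differ — so treat this purely as a shape, not as the actual IKGT format:

```python
import json

# Hypothetical IKGT-style policy: which kernel/platform assets to
# monitor, and what to do when a write to them is attempted.
policy = {
    "resources": [
        {"name": "CR0",               "action": "prevent"},  # block writes
        {"name": "kernel_page_table", "action": "log"},      # log writes only
    ]
}

def enforce(policy, asset):
    """Return the configured action for a write to a monitored asset,
    or 'allow' if the policy does not cover that asset."""
    for res in policy["resources"]:
        if res["name"] == asset:
            return res["action"]
    return "allow"

blob = json.dumps(policy)                 # what would go through configfs
assert enforce(json.loads(blob), "CR0") == "prevent"
assert enforce(policy, "some_uncovered_asset") == "allow"
```

In the real framework this decision runs in xmon, below the OS, so even a compromised kernel cannot bypass it.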
The last topic I want to cover with respect to hardware-assisted security for containers is the runtime isolation work we are doing with Intel Clear Containers. Intel Clear Containers, as you may have heard in the opening-day keynote, is thought of as one of the biggest innovations Intel has done with respect to containers — even though it can be thought of as bug fixing, it is still a very powerful technology that provides runtime isolation of containers. The idea is to use Intel VT instead of namespaces for isolation, which provides an extra hardware-based security layer while still supporting the same container deployment model. You still get the advantages of container tooling such as Docker — you still package your application as a Docker image — but instead of running it with operating-system-level virtualization, you run it inside a very lightweight, thin VM-based sandbox assisted by hardware.

So we use Intel VT, which provides the isolation and the extra security between two containers. This addresses the threat I talked about earlier, where an adversary in one container tries to break out into another container. The trusted computing work we have done also addresses this attack, in the sense that it prevents the tampering from happening in the first place, at container launch time; but if tampering somehow happens, Clear Containers limits the attack to that particular container and does not expose the other containers. This slide is just showing that the containers are now protected by this extra Intel VT-x layer, and we have already integrated it with rkt. In rkt you have different stages: there is a stage1 that uses the default namespaces, and you can have another stage1 that uses these VT-based containers.
Summarizing the key takeaways: the threats we have addressed are tampering with the platform or host, tampering with the container itself, and breaking out from one container into another container or into the platform itself. The technologies we have shown to prevent some of these attacks are Intel Cloud Integrity Technology, Intel Kernel Guard Technology, and Intel Clear Containers. With Intel CIT, the Cloud Integrity Technology, the essential idea is to use Intel Trusted Execution Technology but expose it from the hardware all the way up to the application. The problem with Intel TXT, as you heard in the 11:50 session this morning where IBM talked about the challenges they had when trying to use Intel TXT and the TPM — and the quote I liked there was that the TPM has been a novel feature for 20 years. The TPM has existed for a long time, but actually exposing it all the way to the application has been a significant barrier to the adoption of trusted computing. You can use OpenStack to launch VMs and containers with the architecture we presented, using trusted compute pools. Here are some links to get started on these things. We also demoed this, but since this is the last session of the day, the demo stations have already closed.

So I will open for questions at this point. Sorry? Yeah. Sure, take a picture. There is a small, very lightweight Clear Linux that sits inside every container. But that's a good question, and it's hard — it depends on how you define a container versus a VM. You can think of it as a container if you think about, say, the launch time being in milliseconds; if you think in terms of having an operating system within that particular sandbox, you can think of it as a VM. Sure. Thanks, Malini.
To give a quantified number — correct me if I'm wrong — I think the size of this kernel is around 18 to 20 MB, so it is very small; it is not a full-fledged kernel. Thanks. Okay, any more questions? Yeah. Yes, it is in servers now, and we have also addressed one of the problems with TXT, which has been activation — activating TXT has been a pain. So we have enhanced the activation tools for TXT, which are available on intel.com/txt, and which make provisioning TXT much easier than it used to be. It is already there in the servers; you just need to activate it if it is not. As for launch time, we definitely have the numbers: compared to plain Docker it is maybe a hundred milliseconds more, but the launch time of the containers is still in milliseconds. Can you complete — sure, I think we can continue the discussion offline; we are running out of time. So thank you, everyone, for coming to this session, even though it was the last session of the day. Thank you.