Hi everybody, thanks for coming this afternoon. I know it's been a long day, but we have one more great session for you. For those of you we may have missed at the door, we're doing a drawing at the end of the session for an iPad mini 3, so if you don't have one of these wonderful little cards, throw up your hands now or forever hold your peace. And just one more time, this is session seven. There we go. Okay, now we're ready to go. Do me a favor: it's like being at a baseball game, so pass the hot dog down to the guy in the seat next to you. We've got more over here. Did we run short? Okay, we'll get you; Laura will get you in the back there. Again, thanks for coming to the Cisco breakout room this afternoon. This is our last session of the day. My name is Gary, and I'll be your host. Very quickly, I'm going to introduce Dave McAllen, who's going to be talking about securing your OpenStack private cloud. He's a technical lead on the OpenStack team at Cisco. Dave, with that, I'll let you get underway and introduce your co-presenter.

All right, great. So, as Gary said, my topic is securing your OpenStack cloud, and I'm going to do an overview of lots of different security techniques, because when you're designing and deploying a cloud you want to have defense in depth and cover as many different techniques as you can, so that if one fails, you've got a backup. My co-presenter from Intel will come up at the end.
I'll introduce him later. If you've hit some of the other presentations in this room earlier today, you've seen sessions covering building your OpenStack cloud, using your OpenStack cloud, and connecting your OpenStack cloud. I'm talking about securing your OpenStack cloud, so we're going to cover techniques and tools to do that across all three of those silos.

To have this conversation, we first need a shared framework and a shared vocabulary for what "securing your OpenStack cloud" means. A traditional way to talk about it is to take your cloud and divide it into security domains, and traditionally there are four security domains that make it easy to talk about. The first domain, on top, is the external domain. This is where your cloud users and your project administrators live, and it's also where the rest of the world lives: all the unauthorized users. This is the section of the cloud where you know the bad guys are, and since it's where people enter your cloud, you want a strong set of APIs to protect the entrance to your cloud.
Below that, in yellow, is the tenant domain. This is where the virtual machines that get launched, the instances, live. This is where your cloud users live, and you want your cloud users to be able to run inside their instances and to communicate between their instances, but not to impact instances owned by other users inside the cloud. That's what that security domain is about. To the right, the green box is the data domain. This is where users store their data, in the storage nodes, and you want users to be able to access their own data but not other users' data. That's the concern in that security domain. Then at the bottom, in orange, are the cloud operators: the management and control plane. You want this layer to be invisible to all the other users, and you certainly don't want non-operators to be able to impact that domain in any way, so that's the one you really need to lock down and protect. Basically, if your cloud is secure, then everybody stays inside their box, and the goal is to mitigate these attacks, to mitigate breaches where people break out of their box.
When we look at the external domain, that means our API endpoints and our web dashboard. We want to let the good guys in and keep the bad guys out, and there are different techniques we can use to make that happen. Of course we're using TLS, that is, HTTPS, to authorize and encrypt user access, and we may want to put web filters or rate limiting in front of those endpoints to prevent denial-of-service attacks. When we look at our data domain, what we're protecting against is information leakage: we don't want user data to escape the cloud. Some ways to protect it are to use TLS to access that domain, and also encryption at rest. If the data is encrypted while it's inside the cloud, then even if somebody walks off with a hard drive, they still haven't walked off with any data. So there are some security tools there. Then the yellow domain, the tenant domain: that's also a sensitive area, especially in a public cloud, where you don't necessarily personally know the tenants using your cloud. You've got to make sure those tenants stay within their assigned area, and we can use various techniques for that: service hardening, mandatory access controls, and providing the code that they actually run inside that domain. And when we talk about defense in depth, we're talking about secondary attacks, where somebody is able to breach out of their box into another box. By deploying techniques such as least privilege, mandatory access controls, and encryption everywhere, we can prevent or minimize the breach if somebody does access an area that they're not supposed to. So let's use this as our framework.
Let's deep-dive into a couple of these techniques. OpenStack, of course, is an open-source software project, so if we want to build a secure cloud, we need to start from the very beginning with some secure programming. It's an open-source package, and some of our brothers and sisters in the open-source world have gotten some bad press lately; I think you know what I'm talking about. OpenSSL got hit with Heartbleed and POODLE, Bash got hit with Shellshock, and just last week some hypervisors were affected by VENOM. So the question we as a community should be asking together is: how can we reduce the risk of this happening to OpenStack? How do we not have a logoed vulnerability happen in OpenStack?

To answer that question, let's do a little research on some of these recent vulnerabilities. Here's a little snippet; I went to GitHub and pulled out this code, and this is the patch that fixes a vulnerability. If you look at the second line, basically what it does is add a conditional that says: if I don't have two bytes in this payload, let's not copy two bytes out. A few lines later there's another conditional that says: hey, if I don't have these bytes in this packet, don't copy them out. And the comment is the most interesting part, because it says: silently discard if I don't have enough space, per the RFC. So the designers of this protocol thought about this possible security vulnerability, but the person who wrote the code didn't implement the check. Does anyone know which vulnerability this is? Yes, that's Heartbleed, and it was just a couple of lines of code to fix. It was really simple, and it illustrates a tenet that we should all practice as programmers: validate your input. This is an example where the implementers of the OpenSSL heartbeat protocol did not validate the input. Here's another example, right from GitHub.
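The validate-your-input principle behind that fix can be sketched in a few lines of Python. This is a simplified stand-in for the TLS heartbeat message, not the real wire format or the actual OpenSSL patch:

```python
import struct

def parse_heartbeat(packet):
    """Echo back a heartbeat payload, but only after validating lengths.

    Simplified message format: a 2-byte big-endian payload length,
    followed by the payload itself.
    """
    if len(packet) < 2:
        return None  # not even enough bytes for the length field
    (claimed_len,) = struct.unpack(">H", packet[:2])
    payload = packet[2:]
    # The Heartbleed bug was trusting claimed_len and copying that many
    # bytes back regardless of how many bytes actually arrived.
    if claimed_len > len(payload):
        return None  # silently discard, per the RFC's intent
    return payload[:claimed_len]
```

A well-formed request echoes its payload; a request that claims more bytes than it sent is discarded instead of leaking adjacent memory.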
It's the same type of thing. A control block came down, and it had a value in it, data_pos, and the code didn't double-check that value, so the user could provide a value that was too big. Isn't C beautiful? Anyway, down there there's some modular division that makes sure the position doesn't get too big for the sector length. This is the fix for VENOM, and again it was just a couple of lines of code, a very simple fix, and basically a violation of the same tenet: validate your input. It's as simple as that.

So if we learn one thing from these other vulnerabilities, it's to keep a checklist of secure programming practices, and let's put "validate input" on it. The good news is we don't have to stop with a checklist of one: the great folks on the OpenStack Security Team have put together a pretty long checklist of secure programming practices, with details, and just this week the content moved from a wiki page to the proper security.openstack.org website. At the bottom of that page there's a list of programming practices, and on it you'll see things like: validate the input from users; and logging guidelines. If you're writing some code and you happen to know the password of the administrator, don't put it in a log.
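The VENOM fix bounded a guest-supplied index with modular arithmetic before it was used to index a fixed-size buffer. Here is a Python sketch of that idea; the names are illustrative and this is not QEMU's actual floppy-controller code:

```python
SECTOR_LEN = 512  # fixed size of the emulated device's FIFO buffer

class FifoBuffer:
    def __init__(self):
        self.fifo = bytearray(SECTOR_LEN)

    def write_byte(self, data_pos, value):
        # data_pos comes from the untrusted guest. Before the fix, an
        # out-of-range value indexed past the buffer and overwrote
        # adjacent memory; the fix clamps it with modular division.
        pos = data_pos % SECTOR_LEN
        self.fifo[pos] = value
        return pos
```

An index of 513 wraps to position 1 instead of writing one byte past the end of the buffer.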
No one needs to read that. Or if you have the authentication token from Keystone: just because you know it, you don't need to log it. Following these guidelines will help make your cloud more secure. There's also some other guidance around secure defaults: if you're adding configuration values and you have to choose a default, choose the most secure one, and that will help the eventual users of your code deploy a more secure cloud. So the call to action for developers is: follow the guidelines. Reviewers: use them as a checklist when you review. And if you're an expert in this area, by all means contribute to the guidelines; they're open source, they're in Git, and you can contribute just like anything else.

In addition to human reviewers, the OpenStack Security Team has also come out with a tool called Bandit. It scans Python code looking for violations of security best practices, and it will flag right away if you do things that make your code vulnerable to SQL injection, or if you didn't validate input appropriately. It looks for a long list of security violations, and it's a great tool to use. I don't want to steal their thunder, but there are some presentations later this week: Bandit at 1:30 on Thursday, and secure programming at 2:20. If this type of stuff interests you, definitely hit those sessions; in the meantime, check out the guidelines, and if you're writing code for OpenStack, please follow them.

At this point we've got secure OpenStack code. We need to install it somewhere, and we want to install it on a hardened Linux operating system. One way to harden it is through access control. Access control is basically a policy of who can do what, and there are two general types. The one you typically see, the one we're used to, is called discretionary access control: you create a file, and it's up to you to assign permissions to it, to decide who can read it. You start off with just the owner able to read it, then there's some problem, so you've got a fix: just change the permissions to world access, and now it works, and you stop worrying about the security. It's discretionary, so the user can shoot himself in the foot by being too permissive. Also, in Linux this is usually pretty coarse-grained: you've got sudo, which can do anything, and then you've got ordinary users, who can just access their own files.

A more appropriate approach for an enterprise install of OpenStack is mandatory access control. In this case it's not the users who define access to each resource, but the system, the system administrator, and the policy that's installed along with the software packages. One implementation of this is SELinux, Security-Enhanced Linux, and it gives you very fine-grained control over who can do what. Here's how it works. SELinux has a decision engine that lives inside the Linux kernel, so it's very performant. Every subject, typically a process, has a type, and every object or resource on the right also has a type. The object could be a file, it could be network access, it could be permission to do something on the system. SELinux maintains, essentially, a database of exactly who can do what within the system. So let's say a subject such as the Nova controller process boots up and wants to read its configuration file. That read-file request goes down, it's intercepted by the SELinux code inside the kernel, and the kernel checks its policy: hey, is this process allowed to read this configuration file?
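The type-enforcement check just described, a subject type and an object type looked up against an installed policy database, can be modeled in a few lines of Python. The type names here are illustrative, not real SELinux policy:

```python
# Policy database, modeled on SELinux type enforcement. The kernel-side
# decision engine answers: may this subject type perform this action on
# this object type? All type names below are made up for illustration.
POLICY = {
    ("nova_api_t", "nova_etc_t"): {"read"},
    ("nova_api_t", "nova_log_t"): {"read", "append"},
}

def access_allowed(subject_type, object_type, action):
    """Mandatory access control: everything is denied unless the
    installed policy explicitly allows it."""
    return action in POLICY.get((subject_type, object_type), set())
```

Note the default-deny stance: a shell spawned by an attacker has some other type, so every lookup for it misses the policy table and is refused.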
If yes, it's allowed to happen; if not, it's rejected and logged. This helps strengthen the security of the cloud with defense in depth. Even if a hacker is able to break into your cloud, maybe sitting on a storage node, and tries to access the user data, he probably won't be able to, because the shell he launched won't have permission to access any of the files on that system. So it provides an extra layer of defense.

Let's see how this looks in the OpenStack world. When you're running SELinux, there's a -Z flag that can be appended to most standard Unix commands. Here's a list of Nova processes running on my OpenStack deployment, and you can see the access is very fine-grained. In the third column on the left, you see there's a nova console type, a nova scheduler type, a nova conductor type, matching the processes that are running. So it's not just a nova user; each process has specific permissions. In addition to each process, each file has a type. Here's the /etc/nova directory, and you can see the configuration files have an etc type, so only processes with specific permission to read configuration files are going to be able to access them. Somebody's not going to be able to hack in and dump the contents of your configuration files unless their process has that specific permission. And in addition to protecting files, you also want to protect other things, like network resources. Here's a list of listening ports on an OpenStack deployment: Cinder has a type assigned for its port, Glance has a type assigned for its port, RabbitMQ too. No one can create these listening ports unless it's the actual process with permission to do so, and no one can write to these ports without specific permission. This makes the cloud extra hack-proof, because you're not going to be able to talk to these resources unless you specifically have permission.

In addition to the permissions that come installed with each software package, there are also extra fine-grained permissions that can be added later. For example, in the second paragraph here, there are some extra permissions configured in particular for Nova: only those three Nova process types can use memcached, and none of the others can. It's an extra configuration step that makes your cloud more secure. So the takeaway: if you haven't used SELinux before to secure your cloud deployment, give it a try; it works. The folks at Red Hat support it through the RDO project (rdoproject.org), and I say it works because if it turns out it doesn't, you can report SELinux issues there. It gives you extra defense in depth for your cloud.

At this point we've got secure software running on a secure operating system. Let's see how much further we can secure our cloud, and let's look at the data. To protect our data, we want to encrypt it, and as a security advocate, it's in my job description that I have to say: encrypt everything. Encrypt your data in transit; I want you to use TLS whenever you connect any two things over IP. And encrypt your data at rest: Swift, why not have encrypted objects; Cinder, encrypted volumes; Glance, encrypted images; Nova, encrypt your memory when you're not actively using it. We're not there yet. Most of us, if we have a cloud deployed, don't encrypt everything, and there are various reasons why. The operator has to decide: what's the sensitivity of my data, what's the risk of a breach, and what's the complexity and risk involved in deploying this encryption? And technically, what's the big stumbling block, the real challenge, for encryption? The answer is key management.
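The heart of the problem is that encrypting is easy but releasing the key safely is hard, which is why key management gets pulled out into a service of its own. Here is a toy sketch of the idea: keys live apart from the data, and are released only to callers holding a token that is valid for that secret. All names are illustrative, and real services obviously do far more:

```python
class ToyKeyManager:
    """Toy key-management service: keys live here, not next to the data,
    and are released only to callers holding a valid token for them."""

    def __init__(self):
        self._keys = {}    # secret_id -> key bytes
        self._grants = {}  # token -> set of secret_ids it may read

    def store(self, secret_id, key, token):
        self._keys[secret_id] = key
        self._grants.setdefault(token, set()).add(secret_id)

    def retrieve(self, secret_id, token):
        # Token validation stands in for the token check a real service
        # would delegate to an identity service.
        if secret_id not in self._grants.get(token, set()):
            raise PermissionError("token not valid for this secret")
        return self._keys[secret_id]
```

The point of the sketch is the separation: the data store never holds the key, and the key store never releases it without an authorization check.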
It's pretty easy to encrypt something, but if somebody else has to decrypt it, you have to keep a key somewhere, and there's no doormat to hide the key under for Nova to find. The good news for OpenStack on key management is that there's an answer to this: OpenStack has key management as a service, and that's the Barbican project. In a nutshell, here's how Barbican works, and it's the same as any other OpenStack service. If you want to get your secret or your key, you first authenticate with Keystone, or your authentication service, using your credentials, and you get back a token. Then, to get your decryption key from Barbican, you send it a request along with your token; Barbican verifies with Keystone that your token is valid for that particular secret, and then passes you back your secret or your key. This opens up lots of different use cases. By distributing keys in this friendly way, you can enable server-side encryption, and you can enable TLS more broadly, because you can manage certificates and their life cycle, plus a lot of other use cases. To help keep your secrets secure, Barbican supports a number of different plug-ins on the back end: you can use Dogtag or some other CAs, and it's extensible to HSMs and other plug-ins. I don't want to steal their thunder, because there are more Barbican sessions later this week, so if key management interests you, if you want to use it to secure your cloud, definitely hit one of those sessions, or both, on Wednesday and Thursday. As for encryption in general becoming a widespread practice in OpenStack:
I think we have a ways to go, but the community can work together, and I think Barbican can be a tool for that. So if you work on an OpenStack project that has key-management needs, check out Barbican, and let's work to integrate key management with all the different OpenStack projects.

Now, the hot topic today, as everybody knows, is containers. Containers, containers. I have to mention containers today, because it is literally the topic of the day. So that's the question: what's up with containers and security? Has anyone tried googling "container security"? I did, and the first thing I got was a lot of links about the shipping kind of containers, of course, and the links are kind of scary, to be honest. There's lots of stuff about border control and customs and homeland security, if you're from the US, basically saying that things can be smuggled into and out of countries and we don't know what they are, because they're just inside these containers. So I'm scared already, and I don't even have any relevant links yet. So I improve my web search a little bit and add "LXC" or "Linux" to "container security", but I'm still scared, because now I'm hitting all these security blogs, and all the security bloggers are saying things like: containers don't contain; different containers on a system share a kernel; all the tenants can access the infrastructure directly; this is not intended for multi-tenant use. This is sort of nuts; what's going on here? If somebody had just read all those security blogs and then been plopped into the conference today and heard all the buzz about containers, they'd be really confused about what's actually going on. So I've got about two minutes to sort this out.
The first thing I'll say about containers is that there are lots of different use cases for them. So if you're going to judge whether a container has the correct security profile for your use case, you really have to define what that use case is and what that deployment of containers looks like. Let's talk about my favorite version of containers first: a nice, simple boot2docker setup. This makes it really easy to get something quick running. I'm a Barbican contributor, and I haven't seen John this week, but another Barbican contributor has put together a container that contains a Keystone deployment: in one container there's Keystone and all its dependencies, an operating system, and a configuration for Keystone. It's got a service token, it's got a bunch of users, and with two commands on my laptop I've got Keystone running locally, which is perfect for a development environment. That's terrific. I wouldn't use it in production, and I wouldn't use it for a public cloud, but for a development environment it's pretty cool. So what are the security implications here?
The first thing is: I trust John, and that's good. But I would not use a container from somebody I didn't know, just as you wouldn't download a random application and run it on your laptop without vetting it. Use that same common sense for containers. The other thing I think container users should do more of is insist on signed containers. If somebody hacked into John's Docker Hub account and replaced his Keystone image with a different one that had some malware, I could wind up with malware on my laptop, and that would not be good. So if we're going to be building and sharing containers, let's all get into the practice of going through the extra step of getting a PGP key and hashing and signing our containers, so other people can know that what we've given out, what you've downloaded, is actually what we intended you to download and install.

Another use case of containers for deployment is Kolla. Kolla is a Stackforge project, and just like my boot2docker example, it uses containers for installing OpenStack, and this is kind of cool too. In this case you take OpenStack and break it up into microservices: you've got maybe Nova, Keystone, Neutron, and you define those as microservices of OpenStack. Instead of downloading the source code directly, with all the dependencies, one piece at a time, and installing it all, you put each service into a container. If I want to build a cloud, I download the appropriate containers, install them, and now my cloud is running. That's kind of cool. Is it secure? Well, what we're doing is using containers to replace native processes, so that passes the smell test.
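Before going further with Kolla, the signed-container practice mentioned above is worth a sketch. Verifying an image means checking, before you run anything, that what you downloaded hashes to what the publisher announced. Real deployments verify a cryptographic signature over the digest with the publisher's PGP or X.509 key; stdlib Python can't do that, so this sketch stops at the digest comparison:

```python
import hashlib
import hmac

def verify_image(image_bytes, published_digest_hex):
    """Return True only if the downloaded image matches the SHA-256
    digest the publisher announced out of band. A tampered image fails.
    hmac.compare_digest gives a constant-time comparison."""
    actual = hashlib.sha256(image_bytes).hexdigest()
    return hmac.compare_digest(actual, published_digest_hex)
```

If an attacker swaps the image on the registry, the digest no longer matches and the check fails before anything runs.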
I haven't made security any worse by putting the OpenStack microservices into containers, so that's pretty good. Have we made security better? Maybe a little, because we've made the install or upgrade of a service an atomic operation, which is good because it makes things simple and predictable, and simple and predictable is good for security. So I'll give that a thumbs-up. But I still have the same concern as with the boot2docker example: if you have a distributor building containers for you, insist on signed containers, and make sure that you trust the distributor of the Kolla containers.

A quick hit on one more container example, and that's Magnum, which was part of the keynote this morning; I'm sure there were other sessions during the day about Magnum too. Magnum is not an installation shortcut; it's actually containers as a service, managed by Nova. So this is Nova, in addition to launching virtual machines for you, also launching containers for you, and these containers contain your cloud applications. This is exactly what the security bloggers warned us about, so that had me concerned a little. But checking into the Magnum solution, they do one thing right. They call them pods and bays, but essentially these are instances: when you say "launch my container," it schedules that container to run on a particular instance, and it makes sure that for one tenant, all of their containers are bundled on top of one resource with one kernel. If there's another tenant, he's going to be running on a different instance with a different kernel. So you don't have that shared-kernel problem, and that's pretty good.
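The placement property just described, where each tenant's containers land only on that tenant's own instance so no two tenants share a kernel, can be sketched as a toy scheduler. The names are hypothetical; this is not Magnum's actual scheduling code:

```python
class TenantScheduler:
    """Place containers so that each tenant's containers share an
    instance (and therefore a kernel) only with that same tenant's
    other containers."""

    def __init__(self):
        self._instances = {}  # tenant_id -> dedicated instance name
        self._next = 0

    def place(self, tenant_id, container_name):
        # First container for a tenant gets a fresh instance; every
        # later container from the same tenant co-locates with it.
        if tenant_id not in self._instances:
            self._instances[tenant_id] = "instance-%d" % self._next
            self._next += 1
        return self._instances[tenant_id]
```

Two containers from tenant A land on the same instance; tenant B's containers always land somewhere else.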
So they're thinking about it, which is good, and that's the main takeaway. There are lots of container use cases, and I'm not running scared yet, because I see that people are thinking about it. But definitely be careful and do your homework, and if you have a particular use case and a particular deployment in mind, make sure they match. And just to cry out to everybody: definitely don't download and run random containers. Let's all get in the habit of practicing safe compute. Also, I want to plug a talk on Thursday: if you want to dive a little deeper into Docker containers and a trusted model, there's a talk Thursday at 2:20 that deep-dives into that.

Everything up until now has been about deploying a cloud and operating a cloud, that type of thing. So let's talk about security from a user's perspective: application tiers. If I'm a cloud application developer, the model on the left is how I think of my application as I develop it. I think of these different tiers: I might typically have a web server tier, which sits in front and talks to an application tier below it, and at the back end I've got my database tier. I think of these as loosely layered but scalable, because I want lots of them; I want them to be resilient and fault-tolerant, with some load balancing. So I've got some dependencies in mind and some properties in mind, but I don't care about things like broadcast or multicast, and I don't care about IP addresses. But then I go to my Neutron dashboard to configure the network I need, and I'm confronted with routers and networks and subnets and IP addresses, and it's not what I want.
It's not what I'm thinking about. So there's a layer of complexity here, a layer of complication, which can lead to mistakes, and mistakes lead to security holes. What can be done about this? One answer is group-based policy. Group-based policy is a Stackforge project, and what it is, it's an interface for capturing application intent: an interface that captures the things I just talked about. If I want to define the relationship between a web group and a database group, I'm going to have a set of rules, and the rules will be things like: only my web group can talk to my database group. That's a pretty good rule. Say I want a secure database group: okay, drop a firewall in there. I want a highly available database group: boom, drop a load balancer in there. This is how cloud application developers think, and it's the kind of interface they would really like. What group-based policy does, after you define these relationships and rules, is talk to the Neutron driver, and Neutron does the heavy lifting of turning that into subnets, firewall-as-a-service, and load-balancer instances. That makes it really handy, and by abstracting these policies and automating them, it supports all the interfaces you'd expect: CLI, dashboard, or Heat for your automation. Group-based policy makes it easy to deploy the application security that you want, and that makes it less error-prone and therefore more secure. You saw it made sense in a simple case; in a more complicated case, you might have a bunch of existing applications with three tiers.
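The rule model just described, groups plus explicit relationships like "only the web group may talk to the database group," can be sketched as data that a driver would later render into subnets, firewall rules, and load balancers. This sketch only evaluates the rules; the names are illustrative, not the actual group-based policy API:

```python
# Each entry says one group may initiate traffic to another.
# Anything not listed is denied, which is the intent the developer
# actually has in mind, expressed without subnets or IP addresses.
RULES = {
    ("web", "app"): True,
    ("app", "db"): True,
}

def may_talk(src_group, dst_group):
    """Group-based policy check: connectivity exists only where a rule
    explicitly provides it; everything else is denied by default."""
    return RULES.get((src_group, dst_group), False)
```

Note that the web tier reaching the database directly is denied simply because no rule provides it, which is exactly the mistake-resistant default the talk is arguing for.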
You have three tiers All you need to do is you set the rules for each of them and then group-based policy will turn that into configuration for you So my takeaways here, it's hey, it simplifies your configuration It's gonna make it less error prone and then the errors is what leads to to security breach And if you're interested in this there's a couple talks Wednesday 5 20 is a user session Some IT guys took a bunch of their existing enterprise workloads They migrated it to OpenStack and they use group-based policy to make it really easy for them to get that up and running Quickly so I recommend that one and then Thursday. There's a hands-on lab So if you want to get get your hands dirty Configuring running that definitely check out that session Thursday at 11 okay, so up until now I've talked about just software and The cloud has to run on something right so there's hardware that are too so if we're talking about defense in depth defense at every part we need to talk about the hardware too and Is an interesting marriage between the two because with the cloud you've got lots of great things lots of flexibility Things available on demand you got broad access. You've got resource pulling rapid elasticity I want that flexibility of the cloud But then we're looking at hardware and from a security point of view hardware is really really interesting because it's you know relatively immutable It's got a very small attack service very reliable behavior and a lot of times it's certified you have FIP certification or other government certifications and It's really cool if you can take the properties of both of those and marry them together and to talk about this I'm gonna Introduce regulatory from Intel to talk about some interesting things that Cisco and Intel have done recently to Make the cloud more secure Hey, I'm told that I'm between you and alcohol and meals, so I'll keep it to three slides. 
Hopefully, okay. I work in our data center security products group, and I do a lot of security architecture and development. I really like this slide they put together: the cloud, plus Intel hardware, end of story. I don't think it's that easy, right? For the last three or four years, Intel has been focused on one thing: we want to bring what is called integrity assurance, or trust assurance, to OpenStack clouds, for all the compute platforms, and compute could be your standard compute workloads, storage, network, anything. We are continuing to push that integrity assurance up the stack. We want to make sure that as a tenant, your workload, your VM, is integrity-protected. Never mind just the service provider; you trust the service provider, and you have visibility into the integrity there, but as a workload owner you want to protect the workload's integrity and confidentiality. So what we're trying to do is say: hey, your platform is trusted, and then keep going up the stack as high as we can, to VMs, workloads, apps. That's what I'm going to talk about for the next ten minutes.

There are three primary use cases that we are trying to enable. The first one is trusted boot: having the assurance that your workloads at a service provider are in fact running on trusted infrastructure, that the hardware is trusted, the firmware is trusted, the BIOS is trusted, and the OS and the hypervisor are trusted. Having that visibility, having that assurance, is number one for a tenant. It's called trusted boot, and it's been part of OpenStack for about two to three years now, starting in the Folsom release.
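The measured-boot idea behind trusted boot can be sketched with the standard PCR-extend operation: each boot stage (firmware, BIOS, OS, hypervisor) is hashed into a register before it runs, and each new value folds in all previous ones. This is a simplified software model; a real TPM keeps the register in hardware at specific PCR indices:

```python
import hashlib

def extend(pcr, component_bytes):
    """TPM-style PCR extend: new PCR = hash(old PCR || measurement).
    Because each value depends on all previous ones, the final PCR
    attests to the exact boot chain, in order."""
    measurement = hashlib.sha256(component_bytes).digest()
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(stages):
    pcr = b"\x00" * 32  # registers start zeroed at power-on
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr
```

Swapping, reordering, or tampering with any stage changes the final value, which is what a verifier compares against known-good measurements during attestation.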
We have the right extensions in OpenStack to make this happen, so there should be no reason why you shouldn't use this in your own OpenStack private cloud implementations. And if you are asking someone else to provide you the cloud, you should insist that they enable trusted boot in their OpenStack clouds.

The next thing we did, on top of that: there are very many regulated applications, things that need control over where they run and where they migrate. Maybe they are PCI DSS apps, maybe HIPAA-related apps, maybe just apps that need data sovereignty and workload sovereignty. So we enabled what is called geo-tagging, or asset tagging, in hardware, whereby you can do what is called boundary control. Now you can say, through policy, "this workload will only run in this geography," and the OpenStack scheduler will enforce that for you. If you try to migrate outside, it will say "hey, you violated a policy" and it won't let you. But now the question is: what about the storage volumes that go with that workload? Your VM may be running in Germany, but it could be mapping a storage volume that's somewhere else. So we took the same approach we took with the Nova scheduler to the Cinder scheduler as well. We wrote extensions to the Cinder scheduler that ensure that when you're creating a storage volume, or attaching one, or migrating one, the same geo policy is enforced. It will say: this VM and this storage volume need to be in the same geography; you can't map them if they aren't. You can do that, and it's enforced in hardware and attested in hardware. This is not upstream in OpenStack yet; we tried to upstream it into Kilo, there's a lot of activity in the community on this whole geo-tagging and boundary-control piece, and we are hoping it gets into Liberty. And the reason is simple.
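The boundary-control behavior just described, on both the Nova and the Cinder side, looks roughly like a scheduler filter. This is a hypothetical sketch, not the actual scheduler-extension code; in the real system the geo tag is provisioned and attested in hardware, not a plain field on a dictionary:

```python
def filter_hosts(hosts, required_geo):
    """Boundary control as a Nova-style scheduler filter: only hosts
    whose attested geo tag matches the workload's policy remain as
    placement (or migration) candidates."""
    return [h for h in hosts if h.get("geo_tag") == required_geo]

def can_attach_volume(vm_host, volume_host):
    """Cinder-side version of the same policy: a VM and its storage
    volume must sit in the same geography before a mapping is allowed."""
    return vm_host.get("geo_tag") == volume_host.get("geo_tag")
```

A workload tagged for Germany simply never sees hosts outside that geography, and a cross-geography volume attach is refused before it happens.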
These are changes to schedulers, and the OpenStack community is not interested in putting new features into Nova; they would rather fix the 1,500 bugs that are in Nova today.

All this is good: it gives you assurance up to the OS and hypervisor level. But now you're a tenant, hosted by somebody else. How do you protect your workloads? That's where the last use case comes in, what we call tenant-controlled workload protection. When you take an image and put it with the service provider, you encrypt it and put it in Glance (the stuff Dave was talking about, encrypting images in Glance), but the key point is that you own the keys: the key management system resides with you, in your enterprise. When the time comes to launch the image, the OpenStack controller routes that request to a specific compute node somewhere. Before Nova launches that VM image, it has to ask the key management system residing with the tenant: hey, I need the decryption key for this image so that I can launch it. And the key management system does what we call trust-based retrieval of keys: the service provider has to prove to you that the VM is being launched on a trusted server, that the server is trust-booted, that it's in the right geography, that it meets all the appropriate compliance requirements, before the key management system releases the key.

Not only that: it releases the key wrapped in a hardware-rooted key that lives in the TPM. So it doesn't matter if anybody grabs those packets in the middle; they can't do anything with them. The private key is in the TPM on that server, and that's the only entity that can decrypt it. So the request lands on a server, the TPM unwraps the key once we know it's a trusted server, and then the VM image launches on that server. The only time the image is in the clear is when it's running on that machine. This can be VM images or data.
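That trust-based key-release decision can be sketched as follows. The attestation report is mocked, the TPM wrapping step is only simulated with a byte prefix, and all names here are hypothetical, not the actual key-management API.

```python
# Sketch of tenant-side, trust-based key release: the tenant's key manager
# hands out an image decryption key only after the provider proves, via an
# attestation report, that the target server is trust-booted and in the
# right geography. The TPM key-wrapping is only simulated here.

image_keys = {"web-image": b"supersecretaeskey"}  # keys stay with the tenant

def release_key(image_id, attestation_report, required_geo):
    """Return the wrapped key, or None if the server fails the trust policy."""
    if attestation_report.get("trust") != "trusted":
        return None                      # server is not trust-booted
    if attestation_report.get("geo") != required_geo:
        return None                      # server is in the wrong geography
    key = image_keys[image_id]
    # Real flow: wrap `key` with a public key whose private half lives in
    # the target server's TPM, so only that server can unwrap it. Simulated:
    return b"TPM-WRAPPED:" + key

report = {"trust": "trusted", "geo": "DE"}
assert release_key("web-image", report, "DE") is not None
assert release_key("web-image", {"trust": "untrusted", "geo": "DE"}, "DE") is None
```

The wrapping step is what makes interception useless: even with the packets in hand, only the TPM on the attested server holds the private key needed to unwrap the payload.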
And it's not just VM images; it could be data or anything else. We are taking workload confidentiality protection away from the service provider and putting it in the hands of the tenant. So even if a subpoena comes to the service provider, they can't do anything, because the keys are still in the enterprise.

This is what we announced a couple of days ago. It's called Cloud Integrity Technology 3. It's available in beta form right now, with the launch in June. It's going to work with Icehouse to start with, and it will support Juno as well as Kilo. There are demos of this at the Intel booth, so I definitely urge you guys to go take a look.

And of course, I can't finish any talk at this summit without talking about containers. We are doing the exact same thing, the same model, for containers. There's a talk on Thursday at 2:20 where I'll talk about trusted Docker containers and VMs in an OpenStack cloud: having the same control plane, which is OpenStack, and how you can orchestrate trusted containers and trusted VMs very transparently. So if you have an interest in that, do come.

Now, Intel and Cisco have been collaborating on a project called White Lightning. Raphael is here; he's the sponsor of this from Cisco, so if you want to know more about White Lightning, he's the guy to talk to. The idea was to take everything we did in hardware, the trusted-VM work you saw before, and make it applicable to the various cloud services Cisco is building, whether infrastructure as a service, platform as a service, or even the SaaS services they're going to build. And the interesting thing about White Lightning is that it goes beyond the standard TPM as the root of trust: there are many Cisco appliances and devices that don't use a TPM.
They have a different hardware root of trust, so we are exploring ways to take the same trusted-VM concepts and apply them to all the Cisco devices. The way this would manifest, if I can show you a slightly complicated chart here: today we do attestation at the bottom for the OS, the VMM, the VMs, the geo-tags, all that, but we are extending it to IoT devices and to the switches and routers from Cisco as well. A very uniform hardware root of trust, a very uniform attestation model, a very uniform trust model that goes across x86 hardware, routers, switches, and any of the IoT gateways and IoT devices. That's the goal of the White Lightning project. We are not there all the way, but that's where we are driving with the Intel and Cisco combination on this one.

So the key summary from my side, before we have Dave back up here: use hardware as a way to provide you a root of trust. Hardware by itself is not going to be enough; you need attestation solutions, you need trust solutions. Intel is providing those, and they are being integrated into OpenStack; there's already a lot of integration happening. At the end of the day, where we want to be is having cloud management software like OpenStack seamlessly use the security primitives that are in hardware, so that you as a tenant, with your workloads, don't have to worry about infrastructure security and infrastructure integrity. That's the goal.

Right, with that, Dave, come on up. And if you guys have any questions, anything we can answer together, we'd be happy to do so. Yes? [Question off mic.] Today it's 1.2, but we are very actively moving to 2.0.
Hi, I'm Marsha Michel, NIST. We have a question because we are trying to figure out how to prevent applications running on the stack from attacking one another or accessing data insecurely. You addressed that earlier, and I'd be interested in learning more about how to do that exactly, whether with SELinux or just in general. So far we have Ubuntu Linux, so we could install SELinux on top.

Yeah, I'm not sure I understand the specific question.

Okay, so... sorry, go ahead.

I mean, what we're trying to do is this: we have applications coming into our stack, and we want to be able to run them independently from one another. They have access to a common pool of data; some of them can be a second step in a multi-step application, but they're coming from different sources. What we want to ensure is that applications are not able to access data from one another, and cannot phone home. And at the same time, for any communication through our storage, we're obviously using encryption and TLS for data transfer, but how can we ensure that none of the data goes out and that no application can attack another? The idea would have been to run them in an air-gapped container, but I'm not sure exactly what the best solution is, and since you touched on this earlier, I'd love to hear more, obviously.

Yeah, I think the weak point I heard there: you said they were sharing common data, and maybe that's the problem. Maybe they should have separate volumes defined in Cinder, and the different applications should be part of different projects, so they would have access to different areas of data.

When you think about it, they do likely need to talk to one another. The way we are thinking of doing that right now is using Go channels for them to communicate information from one application to another, but without enabling the application itself to break the air gap. So I'm still trying, obviously, to figure out a better, nicer solution; right now we're still thinking about it. It was a good opportunity for me to ask a question. We'll talk later, I guess. Okay, thanks. Sorry.

For that side we have a mic right there.

So, Dave, you mentioned TLS in a couple of use cases there, right? One of the things I'd like your input on is the use of TLS between services, service-to-service transport over HTTP. Number one, your feedback on the importance of using service-to-service TLS, and whether you've had experience with any kind of performance tax once that's applied.

Yeah, my experience on that is somewhat theoretical. I know some people have tried it, and they usually do it by deploying stunnel or some other form of offload to front the different services instead of using the native services. I know some people have solved the problem, but it's a difficult one.

So, you've spoken a lot about how you guarantee that a VM I'm going to run is actually the VM I intend to run, using the TPM and a trusted root in the hardware. What are you doing about applications that are running? I have an application running in memory, it has a zero-day vulnerability, and somebody causes a stack overflow or uses some other means of attacking that application. What are you doing about runtime integrity of applications?

One of the most actively worked projects at Intel is runtime integrity protection. But the problem with that, and I'm sure you know this, is that it is not an easy computer-science problem to solve. If I build a watcher that's going to monitor my hypervisor, probably 50% of the CPU is going to go to running the watcher as opposed to doing useful work. So the question is: what can we really do to take small baby steps in that direction?
There are a lot of ideas. Things like: can we protect the interrupt descriptor tables and the global descriptor tables, to make sure they are not tampered with? The other thing we are looking at, and I'm sure you've heard of it, is a technology from Intel called SGX, secure enclaves; Software Guard Extensions is the marketing name, I guess. That will give you the guarantee that even if the OS is tampered with, there is hardware protection for regions of memory: everything in that memory is encrypted at all times, and hardware controls it. On servers, at least, that is 2017 and beyond. But once SGX is available, we can do a lot of the stuff you're talking about: if your app has secrets in memory, they can live within secure enclaves that the OS cannot control. Until then, runtime integrity is a tough challenge for everybody. I'll stop by and talk to you after, sure.

We've got time for one more.

Yes, my question is about one of the slides, where you mentioned GBP as being able to enforce service function chaining. It is not available today at all in Neutron, so can you elaborate a little on what you think is available today and what may be available later on?

I'd recommend the GBP session for that.

Yeah, okay. So it's definitely not part of Neutron as of now, correct?

I don't know.

Okay, because you showed it in the slides, I'm sure.

Yeah, I stole someone else's slide. You got me.

All right, any other questions? Only how quickly they can get to the beer, I guess. Thank you guys. Thank you very much.