Okay. Okay, good morning everybody. Welcome to our presentation. This is about bringing layer 7 security to Kubernetes with OpenStack Kuryr. First, some legal mumbo jumbo. You all know this stuff, so let's move on. And then about the presenters. My name is Ville Mattila. I'm a product architect at Forcepoint, working on next-generation network security. This is Manish Dave with Intel.

My name is Jaume. Can you hear me? Does it work? Okay. My name is Jaume Devesa. I work as a software engineer at Midokura, and I've been involved with OpenStack for a while with Neutron, and lately with Kuryr, Kuryr Kubernetes.

And here is the rest of our team, the key contributors. I want to especially thank Emmanuel and Lauri. Unfortunately, they could not participate in the event, but they did all the heavy lifting here. There is also an acknowledgement for Pino and Bin here. That is not Bin's real picture. Okay, the agenda. First we will give you the problem statement, then a solution overview and the design. Then we have a demo on video, next steps, a call to action, and finally questions and answers. The problem statement. Well, microservices and containerized apps are rapidly gaining momentum. That is a good thing; containers are fine. But security-wise, we still have major concerns and challenges to overcome. One of the biggest is that there isn't really a good way to do layer 7 security for east-west traffic, for containers or even for virtual machines. And layer 7 security appliances on the edge do not have any visibility into the cloud's east-west traffic. At the bottom there are some surveys about container adoption, and there was one keynote figure saying that three times more containers are being used than virtual machines. So, micro-segmentation for containers: what is the gap and what is needed?
So basically, we are stating that network policy does basic access control, and that is not sufficient in all cases. Access control is like the boarding-pass check, and layer 7 security is then the security scanner at the airport: if you are clean, you can go inside. Container networking solutions like Flannel work well, but there is no multi-tenancy yet. So there are these kinds of gaps here and there. Bringing layer 7 security to Kubernetes with OpenStack Kuryr will allow us to do micro-segmentation, meaning very fine-grained control between the container services, and to have advanced threat protection for virtual machines and containers. Also, we can do automatic security service insertion and follow all the normal container workflows. What I mean is that bringing in this kind of security appliance or container does not require you to do all kinds of strange things in your environment; it should be as transparent as possible for your networks.

Next, we will talk about the solution components. So, what do we propose? Forcepoint, Intel and Midokura have created a proof of concept that tries to solve these problems that Ville has already explained to you. It is basically based on two platforms. One is a container infrastructure orchestrator based on Kubernetes. The other is an L7 security solution orchestrated by the Intel Open Security Controller and implemented by a Forcepoint container. The container infrastructure is basically Kubernetes. It works with Docker. I think Kubernetes may support rkt in the future, but right now we're using Docker. Everyone knows Docker, so we're not going to talk about that. On my side, we are going to focus on the Kuryr project. What is the Kuryr project? What do we try to solve? Maybe some of you, if you are here, already know what Kuryr is and have an idea about this. But Kuryr Kubernetes is currently a downstream project at Midokura.
We are trying to push it upstream, so maybe you haven't heard about it yet. And we, of course, use Midokura's MidoNet as the SDN solution. After that, after explaining this part, they will cover the security solution in more detail. Basically, what is Kuryr? Ville has mentioned two problems that we try to fix here. One of them is multi-tenancy. With Flannel, I don't know if you know how Flannel works, but basically you assign a big pool of IPs, and from this pool you split it into several subnets and assign, for instance, a /24 to each host. So any container can be routed to any other one. That can work, but we are OpenStack: we want multi-tenancy for our customers or our private deployments. So Flannel is not good at multi-tenancy. The other problem is that some of us will want to move from virtual machines to containers, and this cannot be a turn-off, turn-on migration. You want to move just some services, see if it works, scale them, remove the virtual machine version, and so on. So you want the virtual machines and the containers running in the same network. This is what Kuryr tries to solve. And the solution, the proposal, is kind of easy: it uses Neutron as the SDN controller of the container orchestrators. So Kuryr is just a Neutron client. And don't think about this as a single project; it's an umbrella project that tries to cover all the container orchestrators. This is the status of the project: right now upstream there is only kuryr-libnetwork deployed, which takes care of Docker Swarm. Now we are going to introduce Kuryr Kubernetes, but in the future there could also be Kuryr OpenShift, which would be more or less like the Kubernetes one, and Kuryr Mesos or whatever. So Kuryr tries to cover whichever container orchestrator you choose and bring it to OpenStack.
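As a rough illustration of the Flannel model just described, one big pool carved into a /24 per host, here is a small sketch using Python's `ipaddress` module. The pool size and node names are made up for the example:

```python
import ipaddress

# One big cluster-wide pool, as Flannel uses (hypothetical range).
pool = ipaddress.ip_network("10.244.0.0/16")

# Carve the pool into /24 subnets and hand one to each host.
host_subnets = list(pool.subnets(new_prefix=24))
hosts = ["node-1", "node-2", "node-3"]  # hypothetical node names
allocation = dict(zip(hosts, host_subnets))

for node, subnet in allocation.items():
    print(node, subnet)
# Every container gets an IP from its host's /24, so everything is
# routable to everything else, which is exactly why there is no
# tenant isolation in this model.
```

Running it prints one /24 per node; the point is that all subnets live in one flat, fully routed space with no notion of tenants.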
So currently, kuryr-lib is already released. kuryr-lib basically has the Neutron calls and the SDN binders, so think of kuryr-lib as kind of the Oslo libraries for Kuryr: everything that can be shared across all the Kuryr projects can be part of kuryr-lib. kuryr-libnetwork takes care of Docker Swarm, and Kuryr Kubernetes is the downstream project that I'm going to talk about now, but the idea is to move it upstream so everyone can collaborate. Even though it's downstream, it's open source, so you can look at the code and try it if you want to. So how exactly does Kuryr Kubernetes work? Say you deploy an nginx container with Kubernetes. I assume you already have everything configured. You call the Kubernetes API to deploy a container; the API then calls the kubelet on the worker machine to deploy that container. Meanwhile, our service, Kuryr Kubernetes, is watching events on the Kubernetes API. In the diagram, the green boxes are the ones that belong to the Kuryr Kubernetes project, the blue ones are Kubernetes itself, and the red one is the rest of OpenStack. Once a new pod is created, the watcher notices, and it calls the Neutron API to create a port. This gives you back a JSON document with the information of the port, which already includes the IP address of that port. The Kubernetes API also allows you, for any object you have, to write anything into its metadata annotations. So what we do is write the Neutron port information into the Kubernetes pod object. Meanwhile, Kubernetes has already created the container and tries to bind it. So the CNI driver comes in. CNI is an interface for networking: what libnetwork is for Docker, CNI is for Kubernetes.
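The watch-and-annotate flow just described can be sketched roughly like this. This is a minimal, self-contained mock, not the real Kuryr code: the `FakeNeutron` client, the annotation key, and the pod dictionary are stand-ins for the Neutron API and the Kubernetes pod object.

```python
import json

class FakeNeutron:
    """Stand-in for a Neutron client; the real one calls the REST API."""
    def create_port(self, network_id):
        # Neutron returns the port, IP allocation included.
        return {"id": "port-1234", "network_id": network_id,
                "fixed_ips": [{"ip_address": "10.0.0.5"}],
                "mac_address": "fa:16:3e:00:00:01"}

def on_pod_added(pod, neutron):
    """What the Kuryr watcher conceptually does on a new-pod event:
    create a Neutron port and store it in the pod's annotations,
    where the CNI binder will later read it."""
    port = neutron.create_port(network_id="net-tenant-a")
    pod["metadata"].setdefault("annotations", {})
    pod["metadata"]["annotations"]["kuryr.org/neutron-port"] = json.dumps(port)
    return pod

pod = {"metadata": {"name": "nginx-1", "annotations": {}}}
pod = on_pod_added(pod, FakeNeutron())
print(pod["metadata"]["annotations"]["kuryr.org/neutron-port"])
```

The key idea is that the pod object itself carries the Neutron port data, so the networking side needs no extra database of its own.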
And the CNI driver calls the binder, and the binder reads the API, sees the port, knows which IP address this port has to have, knows the gateway, and binds the container into the network. As you see behind the CNI driver, there are several binders. What's nice about Kuryr is that it is almost 95% SDN-agnostic. You only need a very small piece of code that does the binding to the network to be compatible with Kuryr. And mostly, the same code that you have in nova-compute for binding virtual machines you move into kuryr-lib, and you are compatible with the rest of the Kuryr projects. And that is basically how it works. This is only for pods; it is the most complicated case, but it is the one we are covering now. But in our approach so far there is no security: all the pods can see each other. This is where our folks from Intel and Forcepoint come in and explain the security part.

So let's come back to this strong statement that access control is not security. Why is that? Here are some examples. If you have a public service running in the cloud, it needs to be accessible from outside. So then everybody can contact your services, and, well, that's it. You kind of hope that your north-south firewalls or security functions are working well. Then there are targeted attacks that jump from network to network towards the victim. And finally, if they gain access to a server near the interesting or critical service, the access list again cannot prevent further compromise. So a modern security solution must also inspect the layer 4 to layer 7 payloads to catch the bad guy. Also, sophisticated attacks use advanced evasion techniques to get past firewalls or other security functions. Here is an example. Here you can see the attacker, who knows that the target is vulnerable and that it is protected by some firewall or security solution.
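The binder step just described can be mocked the same way: read back the annotation the watcher wrote and turn it into the interface configuration for the container. The annotation key and field names here are illustrative, not the real Kuryr schema, and the gateway is hard-coded where in reality it comes from the Neutron subnet.

```python
import json

def bind_pod(pod):
    """What a CNI binder conceptually does: read the Neutron port info
    that the watcher stored in the pod annotations, and return the
    network configuration to apply to the container's interface."""
    raw = pod["metadata"]["annotations"]["kuryr.org/neutron-port"]
    port = json.loads(raw)
    return {
        "ip": port["fixed_ips"][0]["ip_address"],
        "mac": port["mac_address"],
        "gateway": "10.0.0.1",  # assumption: really read from the Neutron subnet
    }

pod = {"metadata": {"annotations": {"kuryr.org/neutron-port": json.dumps({
    "id": "port-1234",
    "fixed_ips": [{"ip_address": "10.0.0.5"}],
    "mac_address": "fa:16:3e:00:00:01",
})}}}
print(bind_pod(pod))
```

This read-only step is the small SDN-specific part the talk mentions: everything above it is the same regardless of which backend does the actual wiring.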
So the attacker sends the malware, for example, in very small TCP/IP segments, and those segments are sent in random order. This basically means that the firewall cannot see that there is an attack and allows it to pass through, while on the target side the TCP/IP stack reassembles the traffic and the malware launches. And now the malware has finally got access to all the critical components in that network, so it can make another hop or do its bad things. That is basically how modern attacks work, and the same scheme works with virtual machines and containers. So we should have visibility here, in the last hop of the network. If we are inside this overlay network, we are better off, because then we can protect everything very effectively. The further away from the workload we are, the less we know about what is going on, and the more resources we will need. Okay, over to Manish.

Okay, so just to bring it all together. We have the components: the Kuryr piece and the Midokura networking, which Jaume talked about, and the security piece, which Ville talked about. Now I'm going to talk about the third part of the solution, which is the security controller. This is a project which, well, we have a demo at the Intel booth if you want to stop by later today, but its main purpose is to orchestrate security policies for networking across multiple virtual environments. In this Open Security Controller example, you see an OpenStack environment at the bottom, which is maybe what we are demoing at the booth, but you also see Kubernetes with its SDN, which is what we are going to talk about right now. Then you can have another OpenStack environment, some other data center with other technology. Basically, it's very common to have your infrastructure in different places.
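The evasion Ville describes, tiny out-of-order segments defeating a per-packet filter, can be shown with a toy sketch. The signature string and segment size are made up; the point is only that a per-segment match misses what a reassembled stream reveals:

```python
import random

SIGNATURE = b"() { :; };"  # toy signature for a known exploit pattern

def per_segment_match(segments):
    """A naive filter that inspects each segment in arrival order."""
    return any(SIGNATURE in seg for seg in segments)

def reassembled_match(segments_with_seq):
    """An inspection engine that reorders by sequence number and
    reassembles the stream before matching, like the target's TCP stack."""
    stream = b"".join(seg for _, seg in sorted(segments_with_seq))
    return SIGNATURE in stream

payload = b"GET /cgi-bin/x HTTP/1.0\r\nUser-Agent: () { :; }; evil\r\n\r\n"
# Split into 3-byte segments tagged with sequence numbers, then shuffle.
segs = [(i, payload[i:i + 3]) for i in range(0, len(payload), 3)]
random.shuffle(segs)

print(per_segment_match(seg for _, seg in segs))  # False: evaded
print(reassembled_match(segs))                    # True: caught
```

Because the 10-byte signature never fits inside any 3-byte segment, the per-segment filter always misses it, while the reassembling engine always catches it, regardless of arrival order.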
And for the security administrator it becomes a problem, because they now have to manage security across all of these different data centers. So where the security controller comes in: at the top, you see all these virtual security managers. For example, you could have an IPS manager, or a next-generation firewall manager like the Forcepoint one we are showing here. And then you have the physical appliances at the top right. Administrators are already doing this in the data center with the physical appliances: they are managing the policies, like Ville explained, for advanced threat protection, anti-malware, or whatever it is. Now, how do you do this in the virtual environments, with all these different data centers at the bottom? This is where the security controller comes in. It does the orchestration and automation of all the security services. Think of it like this: there is a security function catalog. There is the Forcepoint next-generation firewall, and there could be other vendors' IPS devices and so on. The controller takes those and deploys them as virtual appliances, organized into distributed appliances. A distributed appliance is nothing but a logical entity describing how you enable that function across all these different data centers. This creates a very powerful concept, because the security administrator can keep using the same tools, from a centralized management perspective, for all the policies, whether that's across containers, across OpenStack, or across all your physical infrastructure.
And for the security managers this is another good point, because they don't have to deal with the lower-level details of how to work in each of these environments. The demo here, which I'm going to walk through next, shows how this works with Kubernetes, but you also have OpenStack and so on. The security function doesn't really need to worry about the lower-level details of the actual infrastructure, how to integrate with different SDNs and different container, OpenStack, or other cloud technologies. The security controller abstracts that and makes it possible. So let me go into the demo now. First, let me walk through the high-level topology. At the top you see a simple setup: there's a client, the evil client pod, and then you see the web server pod. The web server is a vulnerable web server, like Ville explained, something that can be exploited. With only ACLs, which is what we will show before the insertion happens, you will see how the client can actually use a Shellshock attack, because the web server is vulnerable to that. Then what we'll show is how, thanks to Kuryr and the Open Security Controller with the containerized Forcepoint appliance, we dynamically insert the Forcepoint engine you see in the center, and it does the layer 7 inspection and stops the attack. At the bottom you see the components; we have talked about those in isolation, the three of us, but you see the Open Security Controller and, of course, Kubernetes itself. So the first step is that we deploy the actual security container, which is a Forcepoint security container.
The next step, well, I should step back: actually the zeroth step is that, because of Kuryr, the network binding has already happened and all the information is available in Neutron. Then we deploy the security function. Then we go ahead and get the information on what needs to be protected, which in this case is the vulnerable web server pod. And once we know that, we go ahead and do the service insertion, and that's where the Forcepoint appliance comes in and stops the attack. That's how the demo is constructed. And just real quick: we deploy the appliance, which again is containerized; then we do the actual insertion using MidoNet and Neutron with the Kuryr work that Jaume explained; and finally we drop the layer 7 attack. This is a more physical view of the topology before I go into the actual demo. You see again the security controller at the top, and then the three nodes we ran this demo on: the OpenStack controller itself, the Kubernetes master, and the worker. From OpenStack, for this demo, we are just using Neutron, plus Keystone for authentication. Then we have the MidoNet agent, of course, and the Kubernetes pieces, and what we show at the bottom here is the actual Docker image for Forcepoint, which is the security appliance, the Next Generation Firewall security pod. Okay, so let me quickly walk through the demo now. This is the UI for the security controller. Before I start: on the left side is all the configuration that we're going to walk through. Again, there is an actual demo of the OpenStack piece of this at the Intel booth.
This is the prototype we have done for Kubernetes, just to be clear. On the left, you see the virtualization connector. In the case of OpenStack, the virtualization connector is where you define where your OpenStack controller is, what your Keystone credentials are, and so on and so forth. The manager connectors are where you define the security managers; in this case that will be the Forcepoint manager, with all its credentials and information. The third one is the service function catalog. If you remember the high-level architecture I showed, you have different virtual appliances, and this is the catalog for those appliances. In OpenStack, we actually use the real image, a qcow2 or whatever it is, plus other details about the image in a JSON file. In this case, we are just using the JSON file, because the image is already available through the Docker registry, so we can use that. And finally we'll show how to create the distributed appliance. So let me start the session here. First we're going to add, and I'm going to pause here, the virtualization connector. We are using an SDN controller, so we choose MidoNet, of course. Then we have Kubernetes, and at the bottom we have Keystone, and that's because of Neutron. Remember, Neutron is what we are using. After that is done, now that I have defined how to reach Kubernetes and the infrastructure, next we go and create the catalog. This is where I import the image.
In this case, we're just using the JSON file, because it has the information on how to authenticate and download the image from the Docker registry. So once that is done, let's go back and quickly check. The image is there. You can see the Forcepoint inspection image that Ville has created. Once we do that, we create the appliance itself. This is the definition of the appliance for the Kubernetes environment. We're going to call it a container distributed appliance and create a spec for it, tied to the data center we created. Once this is done, we go back and check. Now that I've created this, I want to make sure: here is the test namespace, test-ns, and there is nothing in it. This is the namespace where we're going to deploy the security container. Okay, now I create a deployment spec. Again, a similar workflow exists for OpenStack; this is the one we created for Kubernetes. So I create it, using this namespace, and this is where the security pod will actually be deployed. Basically, right now for Kubernetes the options are limited; for OpenStack we have many more options on how to deploy the security pod, or in that case the VM, of course. Okay, so now you can see that it is running. Now that it has started running, let's go back and create the actual security group. Think of it as now creating the infrastructure. Okay, so let's first check: this is the victim, the web server which is vulnerable. Now I'm going to add it. The reason I went there was to check which namespace it is in, because I'm going to protect that namespace.
Think of this as, again, this is the basic first version we have, but think of this as saying: I'm going to protect this namespace with the Forcepoint container. So now I've done that. I'm going to go back and just check it. By the way, this is the attacker, the evil pod which is going to attack the vulnerable web server. Okay, and then I'm just going to show you Neutron. Thanks to Kuryr, the ports are in Neutron. Let's highlight the three ports: one is the security container, another is the attacker, and the other one is the victim. Okay, and then this is the insertion. Basically I'm showing that right now there is no insertion; the MidoNet CLI just tells you whether any service insertion has happened or not. Now I try an attack, and it is successful, because we have not done the insertion yet. Okay, so this is the Shellshock attack, which was successful. We also tried a regular curl to make sure the server is working. So again: from the client to the server, attack successful. So now we go back to the security controller and do the bind. The bind is when the actual network redirection happens. Again, thanks to the Neutron information we already have, we are just reusing all of that and doing the insertion. Now that we have done that, let's let it finish. Okay, that passed, you see there. Okay, so now we go back and look at the insertion, and you can see that the insertion has happened. And then we try the curl, which should work, yes: this is the normal curl, normal traffic, and it works. But then we try the attack, and it's not successful. So let's just go back, log into the Forcepoint container and check the logs. We just find the container ID, do a docker exec, and look at the logs.
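For reference, a Shellshock probe like the one in the demo rides inside an HTTP header, and a layer 7 engine catches it by matching the exploit pattern in the payload rather than looking at addresses and ports. This toy matcher only illustrates the idea; it is not Forcepoint's actual engine:

```python
# The Shellshock (CVE-2014-6271) trigger is a crafted function definition
# in an environment-variable value, typically delivered via an HTTP header
# to a CGI script.
SHELLSHOCK_PATTERN = b"() {"

def inspect_http_request(raw_request: bytes) -> str:
    """Toy layer 7 verdict: allow clean requests, drop Shellshock probes.
    An ACL could not make this distinction, since both requests come from
    the same client IP to the same server port."""
    headers = raw_request.split(b"\r\n\r\n", 1)[0]
    return "DROP" if SHELLSHOCK_PATTERN in headers else "ALLOW"

normal = b"GET /cgi-bin/status HTTP/1.1\r\nHost: victim\r\nUser-Agent: curl/7.50\r\n\r\n"
attack = (b"GET /cgi-bin/status HTTP/1.1\r\nHost: victim\r\n"
          b"User-Agent: () { :; }; /bin/cat /etc/passwd\r\n\r\n")

print(inspect_http_request(normal))  # ALLOW
print(inspect_http_request(attack))  # DROP
```

This is exactly why the demo's normal curl keeps working after insertion while the attack curl is dropped: at layers 3 and 4 the two requests are indistinguishable.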
So basically the traffic was stopped by the layer 7 inspection engine, and there you go. So just to summarize, a few things. One is the security controller working with multiple types of environment. We already have OpenStack, which we're demonstrating downstairs; this is the first time we're trying something with Kubernetes, a container environment. And thanks to the Kuryr work, we're leveraging what we have already done with Neutron. There was not much effort needed, because all this information was already available through Neutron. That's number one. Number two is, of course, the security container, the part that Ville mentioned: how the security container itself protects against the L7 attack. In this case, the Shellshock attack could have gone through in a normal Kubernetes environment with no layer 7 security. Again, the physical appliances that sit at the border of your data center, on the north-south traffic as we call it, are not going to help in this case, because both pods were right there, east-west. So this is the key point here: being able to orchestrate this type of layer 7 security in a Kubernetes environment. So I think at this point I'm going to hand it off to Jaume. He's going to walk through the next steps and the call to action.

Yeah. Well, basically, this is what we have. As I said before, the Midokura Kuryr Kubernetes part is a proof of concept that we have downstream; it's open source, but downstream. So if you are a developer and you are interested in contributing to Kuryr, you have the references here: the weekly meeting and the mailing list, to ask anything or to collaborate. I would like more help on that. And also, in terms of security in Kubernetes, there is already a network policy API in Kubernetes 1.3 that we haven't used in this case.
And it would be great if we tried to push the labels that we use for layer 7 as standard Kubernetes network policy labels. That would be great as well. If you want information about Forcepoint, just contact us. And this is the roadmap for the Open Security Controller, some of the points of the roadmap. And here are more links and references if you are interested, mostly about upstream MidoNet and MidoNet service chaining; there were talks at previous summits, and you can check them here. And basically, that's all. So, any questions? I think we have about ten minutes, so if you have any questions. About ten minutes, yeah. Please.

Probably, yeah. I mean, I did not put the roadmap of Kuryr Kubernetes on here. If you want to see the roadmap, it is upstream. But in Kubernetes you have to read the network tags in the pod, right? Ah, okay, yeah. Probably, right? I mean, where do you get the information from?

Yeah, once we get the tag information, then we can take the action based on that. Is it? Yeah.

So that's your part, because I don't know where you read the information from Neutron to get them.

Okay. Okay. So basically, once we read the label, we can take the action. See, right now we were looking at Neutron. What we want to do, if possible, and this is of course one of the projects, is to have a solution which works, like I was showing, for different environments. Kuryr with Neutron is one environment. Now, if you want to go natively, then I need a way to read where I want to do that sort of redirection, and then the ability to do the rest of the insertion. Those are the two things I need from the network: one, I need to know the information about what to redirect; and two, I need an API to do the redirect.
So right now we're exploring some of those things with Kubernetes, and we saw that some of the labels, the L7 policy labels and tags, are where we could take the next steps. And honestly, we have not done deep work there yet, but we wanted to see what we can explore over there. The reason we started working with Kuryr was that it was really easy: all the information is already there in Neutron, and because of the work we have been doing with Neutron so far, we can just do the service chaining insertion and all that. So hopefully that makes sense.

I understand; it should be possible, yeah. So the controller itself is the same. The security controller is the same. It's just like a... What's that?

Is the Open Security Controller open source?

Yeah, yeah. So you're asking about availability, right? So the controller is going to be the same, the Open Security Controller is the same, and it just works with multiple environments. So I think the question is: is it the same project? Yeah. No, no: it's a standalone project right now. We're trying to get it to the next level, where we're going to open source it over the next few months, so you'll see some announcements about that. So to answer your question, this is more like a work in progress right now, and the idea is to open source it, but it's not available through Kubernetes. It's a standalone project. Does that answer it?

The part that reads from Kubernetes and writes, the Kuryr part, is open source. I mean, it's downstream, it's in the Midokura Kuryr repository, but it's open source, and we are trying to push it upstream to OpenStack.

Yeah, that part they will have to do. So the Kuryr part is definitely open source, but the security controller is standalone. So the VNFs have to have some kind of a plug-in. But they will... yeah.
Yeah, Forcepoint has a plug-in, for example. The demonstration that we have downstairs is Palo Alto Networks, and Intel Security, which is McAfee, with an IPS. So each of these vendors has plug-ins. Once they have plug-ins, then... Of course, downstairs the demo is with VMs: there's a Palo Alto VM, an IPS VM, and all that. This one is the container one, so this is a little bit like rev zero of the Kubernetes part of it. But for the VMs the plug-in model is the same: it will be a plug-in that each of these VNFs has. Yeah.

Yeah, so today we are working through Kuryr with MidoNet; we're working with PlumGrid, which is the demo downstairs; and we are working with, let's say, Nuage and others. And then we're also working with the native networking-sfc Neutron project. It's not done yet, so we're working on it. So basically the idea is to support all these different networking combinations that you have and be able to do the security across them. For us they are just plug-ins: MidoNet with Kuryr is a plug-in, PlumGrid is a plug-in, and so on and so forth. So it's about having one place where you can connect to all these different networking solutions. A few more minutes. Any questions? Two, three minutes left.

Yeah, yeah. We have extracted our IPS user-space engine and put it into a Docker image. So it's basically one user-space application which can start very rapidly: in five seconds it is up and running. That is the great thing about this kind of container security. You can easily scale it up. For example, in a hot-patching situation: there was the Heartbleed attack, everybody knows that one.
So you can have lots of servers with a broken OpenSSL implementation, and you can put this kind of container in between them and protect against Heartbleed or similar attacks easily. And then you can, in your own time, bring your new workloads down and up. For example, if you have some server with a critical application that cannot be live-migrated easily, then this is a really nice way to protect the environment in this case.

But I think he was asking about the size of the image.

The size of the image? With our fingerprint package it's under one gigabyte or something like that. Yes, yes. We are thinking about what the best method is. Let's say that you need to scale this container up to, let's say, 1,000 hosts: doing a dynamic update takes time. So maybe the better approach is that we give you a new image which can be rapidly deployed. We have not yet decided which is the best way. Maybe some kind of hybrid would be one option.

So that's where the security controller will also play a role: swapping appliances out as we scale up or down, or just replacing one with an updated appliance. Because we are aware of the insertion we have done, we can start the new one, change the insertion, and things like that. Of course we have not done that work yet, but that's what the security controller can do. Something like that exists for the VM part we have done for OpenStack, so the container one will be much the same, though it will be different, of course.

We are very interested to hear your thoughts, so please contact that email address and we can have a long discussion. It's a very interesting topic to us: having good protection inside the clouds. Well, there was a really great talk yesterday about a holistic approach to OpenStack security. You should follow all the best practices there, but eventually you will be breached in some way or another.
So the great thing about this Open Security Controller is that you can have multiple solutions there. And if you get service chaining working well in the future, you can have a double layer, or whatever kind of really strong security. So in my opinion, it is really good practice to have a security solution available there. You don't need to use it to protect everything, but if you have a critical workload in the cloud, then it is really wise to protect at least those ones. Yeah, I mean, security is never just one thing, right? Okay. All right, thanks everyone. Thank you very much.