All right. Hi, everybody. Thanks for coming today. We're happy to have you here. Today we're going to talk about securing Kubernetes workloads using production-grade networking. Since we're at an OpenStack Summit, that of course means leveraging the hardened networking solutions we have from Neutron and applying them to secure Kubernetes workloads. My name's Cynthia Thomas. I'm a systems engineer at Midokura; we're a network virtualization overlay company.

Hi, everyone. My name is Tim. I'm a software engineer at Google, and I work on Kubernetes. I'll be speaking to the Kubernetes topics today.

Hello, everyone. My name is Irena. I'm an open-source architect at Huawei, and also a Kuryr core team member.

All right, so today we're in Barcelona, and we're going to use the Catalonia Intelligence Agency as an example of the workflow operations they face when trying to provide resources in a secure manner, in a timely fashion. The CIA IT operations team is especially concerned with deploying technology efficiently, using the latest tools, and providing their agents with applications that let them do their day-to-day, very confidential work. Two members of the engineering team are Antony, who is in charge of DevOps and wants to enable the CIA with DevOps methodologies, and Bertha, an eager developer who is keen on using the latest technologies to deploy her applications efficiently as she develops and tests them. She likes some of the new stuff that's coming out, like Kubernetes.

So what was the world like before Neutron? How did we enable networking for application development? First, a project would get defined, and the developer needed to define its environment. It was basically human-defined networking. The developer would call or email the sysadmin to request servers. The sysadmin would install the OS and call the networking person. The networking person would define a VLAN and a subnet, and then request firewall policy on whatever physical firewall box sat upstream. From there, someone would do the physical plugging in. If any of that was wrong, or the requirements were wrong, the whole process had to start over.

This is what was happening before Antony and Bertha implemented OpenStack and Neutron in their environment; they were doing things the hard way. As an example, back in June, Bertha knew of a project coming up in Catalonia: apparently all these OpenStack Summit attendees were coming to Catalonia to attend a summit, and the CIA was really interested in tracking their movement. Bertha needed to help develop this application in a timely manner, so she went ahead to define the project and the resources she needed, and from there she had to start that whole process, which could take weeks or months. Antony had to configure the VLAN required to connect the servers. When she was ready to test with external access, firewall policies needed to be configured, hopefully with no typos. It was on the order of weeks to months just to get delivered, at the risk of not delivering the application on time while all these rogue OpenStack Summit attendees wandered around untracked.
Then came the initiative for Project Quantum, which eventually became Neutron. It started as an OpenStack incubation project, and in a few years it matured into a pure networking-as-a-service project, with both tenant and admin APIs that are quite technology-agnostic, support for various networking topologies, and a pluggable framework that lets different vendors plug in L2, L3, and advanced services on top of various technologies and vendor solutions. It also provides extensibility for adding more specific APIs where applicable, and supports advanced networking such as quality of service, load balancing, VPN, and firewall.

So, sounds cool. Back to our friends Antony and Bertha. Antony really wants to help his company be more agile and efficient, and help Bertha and her colleagues do their work quickly and properly. So he installs OpenStack with Neutron and chooses the best backend solution he can find. What previously took Bertha weeks, and sometimes months, she can now achieve in a few minutes, because she's able to manage her own topologies, and her colleagues in different departments can focus on their own work without having to synchronize how they create topologies, allocate resources, or do addressing. And Antony is able to supervise what is happening in the deployment and, when needed, apply restrictions or intervene. It's much, much better than it was before, but Bertha still needs to understand what topology she's going to create and what resources she requires.

Along the way, and I apologize for my voice today, it started to dawn on people that VMs are expensive. They take a long time to spin up, they're not good for fast iteration, there's a fair amount of overhead in running them, and they're not portable. There are all these problems. If only we had a way to virtualize part of the OS, but not the whole OS, so that you could run on a shared machine. We've probably all heard about this thing called Docker now; it's sort of taken over the industry. Containers are an alternative to VMs: you bundle your application and its dependencies, your libraries, but you don't need to bundle the whole OS or the kernel. This is much, much faster than a VM. You can spin up containers in milliseconds, as opposed to seconds to minutes for VMs. It's very developer-focused; the whole workflow is about fast iteration and not worrying about things that aren't part of your application. This very simple UX really captured people's imagination. Everybody wants to use it.

So our good friend Antony saw this and said, I have to have this. I can run it on my laptop on the plane and do all my development, and when I land in Barcelona, I just fire up my application and it just works. So they want to try it out. But containers are very chaotic. Once you have all this freedom to do things, people do things, and you can't manage them. So they needed some help managing what's going on. Along came this thing called Kubernetes; if you've been here at all this week, you might have heard about it. Kubernetes brings an API for managing containers at scale, and that API is very application-centric. We want to focus on the things you need to run your applications, and we don't want to focus on things like infrastructure and operations.
Those are things that are solved outside of the Kubernetes system itself. We want to integrate with existing infrastructure and ops flows. So looking back at our Neutron setup, we want to integrate with Neutron, but we don't want to re-expose Neutron or wrap it. Networking is infrastructure and security is ops; these are two things that are explicitly outside the sphere of Kubernetes, but we still need to address those concerns.

Let me speak for a minute about the Kubernetes network model. The Kubernetes network model assumes a large, flat network space. Everything is shared, everybody can see everybody; that's it. There's no noun for network, and there's no concept of multi-tenancy. You have plugins, which let you decide exactly which network technology you're going to use, but all connectivity is enabled by default; it's implicitly single-tenant. You can see this in things like DNS, where we may be two different departments running on the same Kubernetes cluster, but I can look up your names in DNS and vice versa. Compare this to the default Docker networking model, which now, finally, does have a noun for networks and focuses on more app-centric networks, which is really an interesting blend of infrastructure and applications.

So we have some tension between our teams. Antony is distressed: I can't have Carlos looking at my work. That's not good, and this is the CIA, after all, right? Within Kubernetes, there's this concept of a namespace. A namespace lets you take a big cluster and carve it up into smaller subclusters, if you will. All of the main Kubernetes concepts, like pods, services, and replication, are namespaced. What a namespace gives you is the freedom to call things whatever you want to call them. If you want to call your database DB, that's fine. If Richard decides he's going to call his database DB, that's also fine, because he's in his own namespace. Namespaces are used for different things by different people in Kubernetes; there are different ways to use them, and there's no right answer. But the important thing is that a namespace has no relationship to nodes or to networks. Every namespace exists on every node, and they are all part of the same network. It's a horizontal stripe, not a vertical stripe. But if you're going to apply some sort of network policy, it seems like an obvious place to tap into.

So we introduced a concept in Kubernetes called network policy. Network policy is just an API for the application developer to describe their application's connectivity, sort of in the form of a graph. It says who is allowed to talk to whom, and then you can use the network infrastructure to enforce that graph. This is applied per namespace: I go into my namespace and say, I'm going to disable all networking by default, except for the things that I allow. Now I can say my front ends can talk to my middle tier, and my middle tier can talk to my back tier, or my database, but never from the front straight to the database. And these other random people out there can't talk to me at all. This does not yet cover egress; it's still a beta construct in Kubernetes. You can see here some Kubernetes YAML. I know everybody loves to read YAML on a slide. What you see here is our network policy object; specifically, pay attention to the selectors.
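As a hedged reconstruction of the kind of policy shown on the slide, here it is expressed as a Python dict mirroring the beta-era (extensions/v1beta1) NetworkPolicy schema; the namespace, label values, and port are illustrative assumptions, not taken from the slide:

```python
# Hedged reconstruction of a NetworkPolicy like the one on the slide.
# Namespace, label values, and port are illustrative assumptions.
import json

network_policy = {
    "apiVersion": "extensions/v1beta1",  # beta API group at the time of this talk
    "kind": "NetworkPolicy",
    "metadata": {"name": "frontend-to-middleware", "namespace": "cia"},
    "spec": {
        # The "receiver": the policy attaches to pods labeled tier=middleware.
        "podSelector": {"matchLabels": {"tier": "middleware"}},
        "ingress": [{
            # Only pods labeled tier=frontend may connect in...
            "from": [{"podSelector": {"matchLabels": {"tier": "frontend"}}}],
            # ...and only on the middleware's service port.
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}

print(json.dumps(network_policy, indent=2))  # the body that would be POSTed to the API server
```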
So the way we describe network policy is that you start with the receiver. The receiver is a selector in Kubernetes; it defines the group of pods, the compute entities, that you apply policy to. This says all of my middleware can be accessed from my front end. And you can see the same on the other side, where I say all of my databases can be accessed by my middleware. So I've just implemented the graph that I drew here, and everything else is denied by default.

While Kubernetes has a remarkable user experience, it still lags behind Neutron with regard to the networking functionality it can support. In Neutron, we have a fully multi-tenant environment allowing overlapping IPs, while Kubernetes is essentially single-tenant right now. With regard to network topologies, Neutron offers a lot of different choices, while Kubernetes is, by definition, a flat shared networking space with an IP per pod. When we talk about security, Neutron has security groups, port-level security enablement, and ARP spoofing protection; Kubernetes currently has network policies for ingress traffic only, with more to come. Neutron also has advanced services at the port level, such as quality of service, where you can apply bandwidth limiting and DSCP marking, which Kubernetes does not cover, at least at this point. And Neutron has both admin- and tenant-facing APIs, while Kubernetes is primarily application-centric.

With regard to containers, there are also different challenges right now, and there is no holistic solution that addresses them all. Different container orchestration engines use different abstractions and adhere to different interfaces, like CNI or CNM. In Mesos, there is network isolation and network info; in Kubernetes, there is currently no construct for a network, but there are network policies; in Docker, there is a notion of a network. But they're all different, and it's quite challenging to put everything together. In a multi-host environment, there is a need to interconnect hosts; with Flannel, for example, you can do the tunneling, but in some environments a different solution can be more appropriate. With regard to security, there are potential risks with the Docker bridge, where the container's network stack is interwoven with the host's network stack, and sometimes running containers in the same VM provides better security. In some environments, multi-tenancy is required, and it is not easy to achieve in the current container orchestration engines. And sometimes, especially when legacy applications are being moved to microservices, there is a transition period when application developers will be glad to run both VMs and containers on the same networks.

Given all these challenges, the Kuryr project comes along and tries to provide a solution that addresses most of them. Kuryr, and you've probably heard a lot about it during the summit, tries to bridge the gap between the containers world and the OpenStack world. With regard to networking, it tries to bring Neutron as a networking solution to the different container orchestration engines. In essence, it takes the COE's (container orchestration engine's) constructs and data model and translates them into Neutron.
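To ground the Neutron side of that translation before walking through it, here is a minimal sketch, using python-neutronclient, of security groups mirroring the ingress policy above; the credentials, group names, and port are illustrative assumptions, not the actual Kuryr code:

```python
# Minimal sketch: the Neutron security-group equivalent of the ingress
# policy above, via python-neutronclient. Credentials and port are assumptions.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username="admin", password="secret", tenant_name="cia",  # illustrative
    auth_url="http://controller:5000/v2.0",
)

# One group per tier; membership in a group stands in for the pod selector.
frontend = neutron.create_security_group(
    {"security_group": {"name": "tier-frontend"}})["security_group"]
middleware = neutron.create_security_group(
    {"security_group": {"name": "tier-middleware"}})["security_group"]

# Allow ingress to the middleware group only from ports in the frontend group.
neutron.create_security_group_rule({"security_group_rule": {
    "security_group_id": middleware["id"],
    "direction": "ingress",
    "protocol": "tcp",
    "port_range_min": 8080,
    "port_range_max": 8080,
    "remote_group_id": frontend["id"],  # source restricted by group, not by CIDR
}})
```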
If we look at the translation we decided on to bridge Kubernetes and Neutron via Kuryr: a Kubernetes namespace is translated to a Neutron network and subnet, which already provides some sort of isolated domain for the namespace. A Kubernetes pod is translated to a Neutron port, with its IP and MAC address; in this sense, Neutron manages the IP allocation. A Kubernetes service, which in essence is a stable IP for a group of equivalent pods, is translated to a Neutron load balancer, with the service endpoints as the load balancer pool members. If the service is supposed to be accessible from the outside world, then its external IP is mapped to a floating IP on the Neutron external network. And the Kubernetes network policy is translated to a set of security groups applied at the port level, to enable all the use cases that Tim mentioned earlier. Just a few words about the Kuryr implementation: it implements all the generic code required to bring different Neutron solutions to the COEs, and each Neutron plugin implementer just has to provide a small binding script that binds the container, or the pod in the case of Kubernetes, to the host environment.

So back to our friends, Antony and Bertha. Antony is constantly looking at how to improve the existing environment. Now he's heard about Kuryr, he's seen that it has started to mature, and he wants to be the first to try it. So he adds Kuryr to his deployment. Now things are much easier for Bertha: by using only the Kubernetes API, expressing her application requirements through the deployment, and using network policies to declare which tier is allowed to talk to which tier, she can orchestrate everything from the Kubernetes API, and it gets translated by Kuryr into Neutron networks and security groups. And Antony can tune everything from the Neutron administrator API and make it work as well as Neutron can.

Now I want to touch a bit on the PoC solution we have working for the Kuryr-Kubernetes integration, taking MidoNet as the example Neutron plugin implementation. The solution currently works as a downstream implementation of the Kuryr-Kubernetes integration. On the master node, in addition to all the other Kubernetes components, we have the Kuryr Watcher, which watches the stream of Kubernetes API events. For each relevant entity that is created, modified, or deleted, such as a namespace, pod, or service, it performs the proper translation and calls the Neutron API to create the corresponding data model in Neutron. On the worker node, we have the CNI plugin for the kubelet, which is the Kubernetes worker node agent, and it uses the Kuryr CNI driver to bind the pod to the host. When the Kuryr Watcher creates the proper entities, it adds annotations to the Kubernetes data model, so when it comes to the CNI driver, it can get all the required information about the created port, such as its IP and MAC address, and perform the proper binding. The Kuryr Watcher also watches the network policies and translates them to security groups, thereby providing the isolation required by the Kubernetes application.
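A rough sketch of that watcher pattern follows; this is not the actual Kuryr code. It streams Kubernetes watch events and maps namespaces to networks and pods to ports. It is written with modern async/await syntax for readability (the original targeted Python 3.4's coroutine style), and the API server address, CIDR, and in-memory bookkeeping are all assumptions:

```python
# Rough sketch of a Kuryr-style watcher, not the actual Kuryr code.
# API server address, CIDR, and bookkeeping are illustrative assumptions.
import asyncio
import json

import aiohttp
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username="admin", password="secret", tenant_name="cia",  # illustrative
    auth_url="http://controller:5000/v2.0",
)
API = "http://127.0.0.1:8080"   # assumed insecure apiserver endpoint
NETWORKS = {}                   # namespace name -> Neutron network id

def on_namespace(event_type, ns):
    """Namespace -> Neutron network + subnet (an isolated domain)."""
    name = ns["metadata"]["name"]
    if event_type == "ADDED" and name not in NETWORKS:
        net = neutron.create_network({"network": {"name": name}})["network"]
        neutron.create_subnet({"subnet": {
            "network_id": net["id"],
            "cidr": "10.10.0.0/24",  # illustrative; the real thing allocates per namespace
            "ip_version": 4,
        }})
        NETWORKS[name] = net["id"]

def on_pod(event_type, pod):
    """Pod -> Neutron port; Neutron hands out the IP and MAC."""
    if event_type == "ADDED":
        # Sketch assumes the namespace event arrived first.
        port = neutron.create_port({"port": {
            "network_id": NETWORKS[pod["metadata"]["namespace"]],
            "name": pod["metadata"]["name"],
        }})["port"]
        # Real Kuryr would write these back onto the pod as annotations,
        # where the CNI driver on the worker node picks them up for binding.
        print("bind", pod["metadata"]["name"], port["fixed_ips"], port["mac_address"])

async def watch(resource, handler):
    """Stream watch events for one resource type and dispatch each one."""
    async with aiohttp.ClientSession() as session:
        async with session.get("%s/api/v1/%s?watch=true" % (API, resource)) as resp:
            async for line in resp.content:            # one JSON event per line
                event = json.loads(line.decode())
                handler(event["type"], event["object"])

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
    watch("namespaces", on_namespace),
    watch("pods", on_pod),
))
```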
So just a few words about the implementation details. It's an event-based design; as I mentioned, it just watches the Kubernetes API. It is compatible with Kubernetes 1.2. There are two main components: the Kuryr Watcher on the master node and the Kuryr CNI driver on the worker node. It uses the generic Kuryr port binding, where each vendor has to provide a very small binding script for its solution to work. It uses the asyncio library, so it's Python 3.4. And last but not least, there is no need for kube-proxy to implement Kubernetes services, because we have the mapping to the Neutron load balancer.

Yeah, and just to chime in a little bit on that. Kubernetes being an open-source project, much like OpenStack, some of these functions are actually interchangeable and are meant to be best-of-breed depending on the deployer. In this scenario, kube-proxy, as we saw previously, performs what is basically the load-balancing function as we know it on the Neutron side, so it can be replaced with Neutron Load Balancing as a Service (sketched below). In this example, removing the need for kube-proxy on the worker nodes can, in turn, be achieved by the Neutron solution. MidoNet, as an example, would be that Neutron solution; we know it's a scalable, hardened solution from Neutron, providing the layer 2-4 networking services, and it can just be installed on the worker nodes. That way, when the Kuryr Watcher does the translation from the Kubernetes API to the Neutron API, the Neutron plugin can just behave as it normally does.

So how is MidoNet a scalable Neutron solution? For those who aren't familiar, MidoNet was started by Midokura about six years ago, so it's a very hardened, production-grade networking solution already in use. Midokura has been involved with OpenStack since the Bexar release, so it's a strong Neutron solution. It was architected so that there are no bottlenecks in the network, no service appliances acting as choke points, and therefore it has the ability to scale. Depicting the architecture a little: the underlying hardware requirements are quite minimal, just IP connectivity between the compute hosts or, in the case of Kubernetes, worker nodes. It provides layer 2-4 networking services through a simulation that happens at the edge, delivering the packet over an overlay, either VXLAN or GRE. You might also notice there's no controller here; it's sort of considered controllerless, since the agents residing on the worker nodes have the intelligence they need to pass the packet along according to the logical topology. It also has a scalable gateway to allow ingress and egress traffic through BGP, and these are all basically just x86 boxes. At the top, we depict the API: the cloud platform hits the MidoNet API, and MidoNet in turn takes care of the networking underlay, the underlying infrastructure.

As an example, with Antony and Bertha: Antony, in charge of the infrastructure as part of the ops team, has already selected MidoNet as a scalable solution. He already trusts MidoNet with OpenStack for launching VMs, so now he knows he can trust it to bind Kubernetes pods as well. Through Kuryr, this also enables virtual machines and container pods on the same actual Neutron network, or logical network.
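Here is the promised sketch of the kube-proxy replacement: a hedged illustration of mapping a Kubernetes Service and its endpoints onto Neutron LBaaS v1 objects via python-neutronclient. The credentials, pool settings, addresses, and ports are assumptions, not the actual integration code:

```python
# Hedged sketch: a Kubernetes Service mapped onto Neutron LBaaS v1,
# standing in for kube-proxy. Settings, addresses, and ports are assumptions.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username="admin", password="secret", tenant_name="cia",  # illustrative
    auth_url="http://controller:5000/v2.0",
)

def service_to_lbaas(name, subnet_id, port, endpoint_ips):
    """Service -> pool + members + VIP; the VIP plays the stable cluster IP."""
    pool = neutron.create_pool({"pool": {
        "name": name,
        "protocol": "TCP",
        "lb_method": "ROUND_ROBIN",
        "subnet_id": subnet_id,
    }})["pool"]
    for ip in endpoint_ips:               # each endpoint address -> pool member
        neutron.create_member({"member": {
            "pool_id": pool["id"],
            "address": ip,
            "protocol_port": port,
        }})
    vip = neutron.create_vip({"vip": {
        "name": name,
        "pool_id": pool["id"],
        "protocol": "TCP",
        "protocol_port": port,
        "subnet_id": subnet_id,
    }})["vip"]
    return vip["address"]

# Usage with made-up values: three pod endpoints behind one stable VIP.
# cluster_ip = service_to_lbaas("db", "<subnet-uuid>", 5432,
#                               ["10.10.0.5", "10.10.0.6", "10.10.0.7"])
```

An externally reachable service would additionally get a floating IP on the Neutron external network, as described earlier.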
He also values operational tools, since he has to deal with the day-to-day of his infrastructure. Another thing he leverages is MidoNet Manager. MidoNet Manager, depicted a little here, is the GUI that gives him visibility into every single traffic flow happening on these logical topologies, whether it's the Neutron networks joining these VMs and containers with Kubernetes, any policy that might be applied, and, in turn, which security groups or network policies are maintaining the isolation between containers.

As another example, we mentioned Antony wants to provide Bertha with more advanced networking and isolation between these container pods, and therefore he wants to try Kuryr with MidoNet. Today, he can take Kuryr with Kubernetes, leverage MidoNet, and actually try it out. This is now available as a tech preview. I should warn you, obviously, half of this is fictional and half of this is true; I'll let you decide which is which. But you can go to docs.midonet.org, and the tech preview for Kuryr and Kubernetes with MidoNet is available there, with an installation guide and an operations guide to give you a view. It's one of the more advanced solutions providing this type of functionality for Kuryr.

So today, so that Bertha can eventually start playing with the native Kubernetes API, what Antony can do is automatically launch the scripts to deploy the Kubernetes master node, which comes up with the constructs Irena talked about earlier: the Kuryr Watcher and all the regular Kuryr constructs you require. There's also a script to deploy the Kubernetes worker node. You can leverage that and deploy multiple nodes, whether on a cloud or a virtual machine, so it gives you the ability to bring up multiple worker nodes with the required constructs for implementing Kuryr, Neutron, and MidoNet. What we didn't mention, but is obvious to us, I guess: Kuryr, Kubernetes, and MidoNet are all three open-source projects. They continue to evolve, and this is the stage we're at today.

Now, some of the more enhanced security policies that MidoNet can provide above and beyond what regular Neutron can. Neutron security groups take a whitelist-type approach: allow this, and a rule gets installed, a port-level firewall. What MidoNet gives you is that it implements the security group policies as chains and rules, which in turn means you can do more interesting things if you hit the MidoNet API directly. Antony has the freedom, depending on the type of action, to join chains and rules, do rejects and drops, link chains, service-chaining type things; much more advanced options.

As we wrap up, we look to the future. Within Kubernetes, there's a whole bunch of things we know we need to do. I've been working downstairs at the container lounge, and almost every single person I've talked to today has asked me about multi-tenancy. So I think it's unavoidable; it is going to happen in Kubernetes, but it is a very big, very deep problem, so we're going to be tackling it over the next bunch of releases.
Probable evolutions include finally adding network as a noun, adding finer-grained policies, possibly adding rejection policies, adding egress, adding L7 policies, adding some forms of QoS and shaping, and bringing multi-tenant services like DNS into Kubernetes. Again, this will probably happen over the next year as we develop these things.

With regard to the Kuryr-Kubernetes integration status, it's fair to say it's currently quite early-stage. Both the CNI driver and the API watcher are under development. We have this downstream PoC implementation, but the lessons learned from it are being pushed into upstream Kuryr with the proper process of submitting the spec, blueprint, development reference, and patches. Future work is to complete the basic networking connectivity support and then add support for network policies, and to provide a high-availability solution, especially for the API watcher, so it's not going to be a single point of failure. One direction is also to add better support for the different platform-as-a-service systems that consume Kuryr and use OpenStack networking, like Kuryr with OpenShift, and also to bridge between OpenStack VMs and Kuryr Kubernetes, so you can run the containers inside virtual machines.

Just how you can get involved, if you are interested. To get involved in Kuryr, there is a community site; you have all the links here. We have the Git repo, so please join; your help with reviews will be really appreciated. We have weekly meetings on Mondays, so please check and, if possible, join if you want to have influence. On the Kubernetes side, we have our community site, where we track all our processes and features. We also have our GitHub repo. I advise you not to subscribe to the repo, you will get more email than you know what to do with, but we also have Slack. We have thousands of people sitting on Slack every day, including dozens of Googlers, happy to answer questions and talk to people about how to use the system. Last but not least, of course, MidoNet is also an open-source project, with midonet.org being the open-source website; we welcome you to take a look there. You can also try MidoNet with our quickstart guide. It builds an all-in-one Mitaka deployment with the MidoNet 5.2 release, lets you at least get access to the RESTful API, and it's actually pretty easy to expand beyond that. So we welcome you to join as well. We also have the open-source Slack for you to join if you have any questions as you try to deploy. So thanks for your time today. We welcome questions if you have any; there's a mic on the left aisle.

I have a question regarding external IPs. For us in Kubernetes, external IPs pose a difficulty because they need to be bound locally on the server. How do you deal with that in such a dynamic environment?

So there are two answers. One is the way Google Cloud does it: we actually have a script that runs on every machine, watches a metadata service, and adds the IP as a local route. But then Kubernetes actually works around that.
In Kubernetes, the way we handle receipt of traffic is that if you're using a service, we set up iptables rules to accept traffic to that IP address. So if the packet is delivered to the host, to the host interface, then before it gets a chance to be rejected as a non-local address, we trap it with iptables and do what we need to do with it. If that's not working for you, I would love to hear more.

I guess I can add, from a MidoNet perspective, what we usually do for access to an external network: it's achieved through the BGP gateway, so BGP will advertise whatever this external network would be, and the upstream router learns that floating IP address, or external network in Neutron terms, which is therefore reachable from our gateways. I'd love to hear more about that use case as well, how you have to have it pinned there, so maybe we can talk a little after.

Okay, thanks everyone. Hope everyone's been having a good summit.