So good afternoon guys. It's interesting, you now have back-to-back Indian speakers, and it's that time of the afternoon where you start dozing off a little bit if the coffee hasn't kicked in yet. Trust me, you don't want to be dreaming with an Indian accent, right? So get your coffee, make sure you wake up; networking is exciting, yeah? I see a few smiles, that's good. So my name is Karthik Prabhakar. I work for a company called Tigera. Tigera is fairly active in the networking community within Kubernetes networking, and specifically we work on plugins like Calico. We are also co-maintainers of Flannel, and we are one of the co-maintainers of CNI, which is the networking abstraction in Kubernetes. So today, what I wanted to walk through with you is a little bit of Kubernetes networking and give you some of its concepts. But before I do that, I want to give you some of the fundamental design thinking that went into building the foundations of Kubernetes networking, and I want to contrast that a little bit with some of the design thinking that went into Neutron back in the early days. Obviously Neutron has evolved over the last three or four years, but I want to give you that contrast between Kubernetes and Neutron, because there are some fundamental differences, and that will put into context for you why Kubernetes networking is the way that it is. And like every Kubernetes presentation, if you don't do a live demo, you're not worth your salt, right? So you've got to do a live demo. Assuming I can get to the network, I'd like to deploy a Kubernetes cluster: multiple nodes, with networking, launching applications. I've set myself a target of five minutes to do that. I'm hoping I can do it in two, right? And what that means is, for those of you who haven't deployed Kubernetes before, maybe you can even do it along with me. I want you to walk out of this room and try deploying Kubernetes. Super easy. Lots of deployment tool options, many of them extremely easy to use, and this is something that literally everyone in this room should be able to do. Oh, and a little bit of a plug for tomorrow. Today we're just talking about concepts of Kubernetes networking. Tomorrow we'll have a slightly more advanced talk about how you connect Kubernetes and OpenStack together, whether it's Kubernetes side by side with OpenStack or one inside the other. In fact, I'm joined in that presentation by Canonical and AT&T, and AT&T is going to be doing a live demo of how they deploy OpenStack as a containerized application on top of Kubernetes. They actually use Calico for the networking fabric there. So, a little bit of how we got here. Going back to the early days of VM networking, when people started thinking about how to connect different virtual machines to each other, especially in multi-tenanted deployment scenarios, the easy way to do that back in the day was: hey, if A needs to talk to B, let's just connect A and B into an overlay network. And in the early days of OpenStack with things like Neutron, you tried to provide these virtual networking concepts to users and allow users to create their own overlay networks. And then if B needs to talk to C, guess what, you put them in a different overlay network.
And very soon you have to worry about what happens if A needs to talk to C, and you end up backhauling it through a virtual router, and pretty soon the users have to deal with some of the more advanced networking concepts, which very often they don't really care about. They just want applications to talk to each other. So eventually it got to a point where you get this complex mess of overlays, and then you have to deal with SDN controllers to kind of force overlays onto physical network infrastructure. Very often with Neutron, the way many people have implemented it today, you have a mess of bridges, virtual switches, policy enforcement points, security enforcement points, virtual routers, backhauls. And case in point, this is a picture from the OpenStack documentation of what Neutron with Open vSwitch looks like, standard Open vSwitch. For those of you who have this in production, my heart goes out to you. I've spent numerous nights and weekends on this over the last three years, and I think a number of my former colleagues here in the audience can attest to that. When things break, and the things that break could be in servers right next to each other, troubleshooting this complex mess of overlays can be a real pain. Worse, when you add on things like L3 HA, DVR, VRRP, the complexity just keeps growing. Now, that said, there are approaches like Calico and others where you can actually simplify the network, but this is not really a product pitch, this is a technology pitch, and I want to get to the Kubernetes side of things. So here we are, we're here to talk about microservices. Microservices in a cloud-native world, right? Here's a picture of what Netflix's application flows used to look like going back a few years; it's obviously gotten more complex than this. And when you have this large number of microservice instances collaborating with each other, the traditional model of creating overlays between pairs or groups of instances that need to collaborate does not scale. And interestingly, because microservices are very dynamic, the flows tend to vary and things tend to come up and down fairly dynamically, so we really have to look at a new approach, because the world we're moving to is sort of a serverless, functions-as-a-service world, running functions individually in different parts of the infrastructure. And this concept of building overlays for everything is fundamentally an approach that has its roots in the early days of VM networking, and we have to move past that, right? So that said, Kubernetes, which is focused on how you run containers at scale, how you provide an infrastructure for running microservices at scale, took a different set of assumptions. First of all, the world is IP, right? How many of you have an application that does not use IP? So Kubernetes started with the assumption that every node in the cluster has an IP address. It also made the assumption that every pod has an IP address that's unique within the cluster, right? And so when pods communicate, the other pods they talk to see that IP address and can rely on the fact that it's unique. So that's a fundamental assumption, and by the way, that is a little bit more evolved than some of the early design thinking that went into Neutron.
Kubernetes then adopted CNI, the Container Network Interface, as a networking abstraction to allow different vendors to plug in different networking plugins to provide that connectivity between pods, okay? And today there are numerous plugins, each with their own characteristics and different market segments that they go after. My company works with Calico and Flannel, which are fairly popular plugins, but there are plenty of other plugins. The big difference that Kubernetes introduces, and in fact other container orchestrators have also adopted this new abstraction, is that it decouples isolation from network topology. What Kubernetes says is: every pod gets an IP address, and it's up to the network plugin to decide how to connect the instances to each other. But I'm now going to give users a new abstraction, a declarative model by which they can declare what pods need to talk to what other pods. So users can declare in a YAML syntax, using concepts like labels and selectors, how they want pods to communicate with each other and which pods they want isolated from each other. So that's declared, as opposed to enforced in the network using network topology. And what that allows you to do is build networking plugins that keep the network simple. Absolutely, you can build networking plugins that keep the network complex. You can do it, but you don't have to. And this is a fundamental design choice that Kubernetes made. In a Kubernetes environment, because an application or microservice is served by dozens or hundreds of pods, if you have a web server it could be running as tens of web server instances, you need a concept to abstract away this fleeting, dynamic environment, and that concept is services. So Kubernetes has the concept of services, and we'll talk about how that's implemented in the network. A service refers to a collection of pods at the back end. We also have ways of discovering services using a variety of options. You can use DNS, but there are other options, and Kubernetes basically presents all of these to you. And in addition to services, very often users and microservices want to do higher-level traffic redirection based on layer 5 to 7 decisions, like HTTP headers or application-specific headers, and for that purpose Kubernetes introduces the concept of Ingress, which is something that often confuses people. So let's start with a little bit of a dive into what the networking landscape looks like. We'll start with simple east-west traffic, pod-to-pod traffic within Kubernetes, and then we'll go on to the more abstract concepts like services, Ingress and so on, right? So to begin with, a Kubernetes cluster is a collection of nodes. Nodes could be bare metal nodes. They could be virtual machines running on OpenStack. They could be instances running on your favorite public cloud. Doesn't matter. It's a bunch of nodes. Some nodes are designated as master nodes, where you're running the Kubernetes control services. And some nodes are designated worker nodes, where you're actually running the pods, the applications. The main services running on the master nodes are things like the API server, which is how you interact with Kubernetes; a scheduler, which helps schedule pods; and things like the controller manager, which runs a number of controllers, things like add-on managers and other controllers that are part of Kubernetes.
But when the master needs to actually schedule pods to run on nodes, it talks to an agent called the kubelet, which is running on the individual nodes, and the kubelet is responsible for launching those pods and running them. And so you have this kubelet agent running on every worker node. Within that worker node I've called out this concept of the host network namespace. That's typically the Linux networking stack as you and I know it. As you all know, it's possible to create multiple namespaces in Linux, and we'll get to that concept next. So when Kubernetes launches a pod, a pod in Kubernetes is essentially a collection of containers with a shared network namespace. Typically what that means is that collection of containers shares an IP address, it shares a routing table, it shares some basic network concepts. So it has a separate namespace. And when a kubelet launches a pod, which is a collection of containers, what it does is it now says: I need to network this pod and get it connected to the network, so it can talk to other pods and talk to the rest of the world. The way it does that in Kubernetes is using CNI. And CNI is really simple. At a very high level, the kubelet reads a CNI configuration file, which is stored under /etc (by default /etc/cni/net.d). In this case I've used the Calico example; there's a similar config for Flannel, which I'll talk to, and there are other plugins. Each plugin has its own config file for CNI. And the kubelet basically says: hey, I'm calling you, and I have this pod; I want you to connect this pod to the network. And in effect what happens, and I'm using the Calico example here, is that the Calico CNI plugin says: okay, I need to give this pod an IP address, and I'm creating a virtual Ethernet pair that connects the pod namespace to the host namespace. And so that's sort of the first step, and then it's really up to the plugin as to how it connects that virtual Ethernet to the rest of the network. Some plugins use bridges, virtual switches, or create overlays. Some plugins like Calico use simple IP routing. Flannel gives you a choice; you can actually use either. And just walking through this one example of what Calico does: Calico basically writes this new workload into the shared etcd datastore that's available to Kubernetes. That etcd data is picked up by another Calico agent called Felix running on the node, and what Felix does is insert routes for that pod's IP address into the host routing table, saying: if you need to reach that pod, send traffic into this virtual Ethernet. Calico does not use virtual switches. It does not use bridges. It simply sends it to that virtual Ethernet. There's another agent running on the node called BIRD, which is a BGP agent. And so all the Calico nodes within the cluster peer together using standard BGP. And so what that means is, when BIRD detects these new routes, it advertises them to the other nodes in the cluster as an aggregated route. And so within a matter of milliseconds, every node in the cluster knows that that pod is reachable through this node. So any traffic destined to that pod is sent to that node without any encapsulation, without any overlays. For those of you coming from a Neutron OVS world, this might be a little bit of a foreign concept, but networking is really simple. It really is. I see a few people laughing. Networking should be simple, especially if you're looking at scale. So that's basically what Calico does: a virtual Ethernet and routes per pod, so it's fairly non-intrusive in the host stack.
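To make that CNI configuration file concrete, here is roughly what a Calico-style CNI config under /etc/cni/net.d/ can look like. CNI configs are written in JSON; this is a minimal sketch, the exact fields vary by plugin and version, and the etcd endpoint shown is just a placeholder:

    {
      "name": "k8s-pod-network",
      "type": "calico",
      "etcd_endpoints": "http://127.0.0.1:2379",
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      }
    }

The kubelet reads a file like this and invokes the named plugin binary (here, calico) with the pod's network namespace; the plugin then does the IP assignment and wiring described above.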
And your actual data traffic simply flows with normal Linux routing. In this case Calico is not in the data path. It's simple IP forwarding; no real magic to it. Flannel is another example. This is one of the early plugins available for Kubernetes, and Flannel has a variety of ways that it can provide the actual networking. To give you an example of this: again, Flannel is called using CNI. So the kubelet calls CNI, and in this case it gives Flannel some configuration parameters, like which bridge to use; in this case Flannel actually creates a Linux bridge and connects the pod's virtual Ethernet to it. And it tells it how to assign IP addresses; it says use host-local IPAM. And the way Flannel works is that it assigns a /24 to every node in the cluster, and each node assigns IP addresses from that /24 to individual pods. From that point on, Flannel has a variety of back ends to actually connect the different nodes to each other and to exchange routes. The most commonly used back ends for Flannel are either host gateway or VXLAN. With host gateway, Flannel assumes there's layer 2 adjacency between nodes, and in effect it does simple IP routing by sharing the /24 routes among all nodes in the cluster through etcd. So it does simple unencapsulated packet forwarding, assuming there's layer 2 adjacency, but in this case it's not running a dynamic routing protocol. Another common mode of operation for Flannel is VXLAN, and this is a standard overlay that many of you are used to. So if you want to use an overlay, for whatever reason, you can absolutely do so with something like Flannel. And when you use VXLAN, obviously you're paying a little bit of a performance penalty for doing VXLAN encapsulation; depending on your NIC and how it's implemented, you may be able to offload some of that to offset the performance overhead. But again, these are standard VXLAN overlays. So that's a couple of examples of how Calico and Flannel work. There are numerous other network plugins in the market today, but generally what tends to happen is that a lot of the plugins coming in either target specific market segments or claim specific features. I don't want to be in the business of comparing for you which is the best plugin; I think you should really make that call for yourself based on each plugin's merits, and if you're interested, I would certainly encourage you to talk to folks who have built Kubernetes deployments at scale; I'm sure they can guide you on some of the plugin choices. But that's one part of the Kubernetes networking story, which is how you connect pods to each other. Again, keep in mind pods are just dynamic instances that can be spun up and down, and these are typically fleeting instances in Kubernetes, right? And typically the way you would have multiple pods for an application is you would configure either a replica set or a replication controller (the older concept) to say: I'm running an nginx application and I want 10 replicas, and Kubernetes spins up 10 instances of nginx, each as a pod.
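To make that concrete, a minimal ReplicaSet asking for 10 nginx replicas might look roughly like this in YAML; the names and labels here are just placeholders for illustration:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: nginx
    spec:
      replicas: 10                # Kubernetes keeps 10 nginx pods running
      selector:
        matchLabels:
          app: nginx              # which pods this ReplicaSet owns
      template:
        metadata:
          labels:
            app: nginx            # labels stamped onto each pod
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80

You would apply a file like this with kubectl, the scheduler places the 10 pods across your worker nodes, and each one gets its own IP address from the CNI plugin as described earlier.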
And so first of all, the concept of namespaces is the capability to take a Kubernetes cluster and logically partition it into virtual clusters, so that you can isolate different projects from each other. It's not true multi-tenancy; there are other elements, like RBAC and other features coming into Kubernetes, that complement it, but a namespace is loosely that concept where you can partition your cluster into virtual clusters. In addition, in Kubernetes you can assign an arbitrary label to any object, and you can have as many labels as you want. So you can label a pod as an LDAP server. You can label the same pod as something in production. You can label this pod as belonging to project A. The other thing you can do is select, using a label selector, what you want to match on. So you can say: all pods that have the label LDAP server, and all pods that belong to project A. And using this concept of labels and selectors, the deployer can now declare in a YAML syntax: I want to allow all LDAP clients in project A to talk to all LDAP servers in project A over port 636. And you can have multiple such policies. And what that means is that it's now up to the implementation of network policy in Kubernetes to enforce that policy dynamically. So I've given this example here with Calico. The way Calico does that is by taking those policies and, on each node independently, checking whether pods matching those labels exist. And if they do, then it creates iptables rules, actually ipset-based rules, dynamically, which are enforced at the virtual Ethernet connecting into the pod. So it's enforced at the very endpoint. And similarly, at the other end, if there's an LDAP client, it will create ipsets independently on that end. So it's sort of a decentralized architecture, and there are different implementations of network policy. They all behave differently, and you should take a look at what your preferred implementation does, but fundamentally that's what they do. And so this concept of network policy is a really powerful concept which allows you to enforce isolation with policy rather than using network topology to enforce isolation. And this is something that the OpenStack community, specifically the Neutron community, should understand as Neutron looks to integrate more closely with Kubernetes, because this gives you the opportunity to fundamentally simplify networking as you look at combining Neutron with Kubernetes networking. Let's move on to the concept of services, which, like I said, is the ability to have an abstraction that provides a front end for a collection of pods. And so in this case I've got a couple of pods, indicated by this little pink circle here; let's say they're running two instances of an nginx application. They're both running the same application, they've got the same labels, they're basically two instances of the service. And in Kubernetes you can expose that service either with YAML syntax via kubectl, or you can run something like kubectl expose, and that's how you expose the service. And when you do that, this is typically what happens in Kubernetes. First of all, there's a little daemon called kube-proxy, and this kube-proxy daemon runs on every node in the cluster. And kube-proxy essentially operates by setting up iptables rules on the node that map the service to the back-end pods.
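Going back to the LDAP example for a moment, here is a rough sketch of what that declaration can look like as a Kubernetes NetworkPolicy; the label names and values here are made up for illustration, and the policy applies within whatever namespace it is created in:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ldap-clients-to-servers
    spec:
      podSelector:
        matchLabels:
          role: ldap-server        # the pods this policy protects
          project: a
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: ldap-client    # who is allowed to connect
              project: a
        ports:
        - protocol: TCP
          port: 636                # LDAPS

The policy itself is just a declaration; it's the network plugin (Calico, in the example above) that actually enforces it, for instance with the per-pod iptables/ipset rules described a moment ago. Now, back to services.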
To give you an example of the different ways you can implement the service abstraction, one of the common ways is what's called the cluster IP. So if you were to say, I now want a service for nginx that's served by 10 different pods at the back end, essentially this nginx service will receive a cluster IP, which is a well-known IP within the Kubernetes cluster, and kube-proxy's iptables rules take care of doing a DNAT on any traffic destined to that cluster IP and translating it to an actual pod IP address. So basically it's DNAT that translates from the service's cluster IP to the actual pod IP address. And kube-proxy and the iptables rules run on every node in the cluster. This takes care of the case where you have workloads within the cluster that need to use services; so if you have a Redis application that needs to talk to nginx, Redis can use the well-known nginx cluster IP, and that traffic can flow east-west. But sometimes you need traffic from outside the cluster to come into the cluster, and one way to do that is something called node ports. With node ports, Kubernetes, in addition to assigning a cluster IP, assigns a port from a well-known port range (by default 30000 to 32767), giving you a well-known, load-balancer-like service. And now traffic coming to any node in the cluster destined to that port essentially gets translated using DNAT rules and sent to an actual pod IP address. Another way you can do services is with a service of type LoadBalancer. In this case, for certain well-known load balancers, like Google's load balancer, Amazon's ELB, and a handful of others, Kubernetes basically creates the load balancer rules to translate services to pod IPs dynamically. So it's a way to do mappings from services to the back-end pods. So far so good. So now you've done this mapping of services to pods. The next step is: how do you actually find what the service's IP address is? Sometimes you may want to find the actual pod IP addresses, because you might think, okay, I can do better load balancing than having kube-proxy do it for me, so sometimes that might be preferred. And there's a variety of ways you can do that. You can use any of the classic service discovery mechanisms in Kubernetes; that's perfectly fine. One of the default ones that Kubernetes provides, which you're welcome to use, is called kube-dns. And today there are other DNS servers, like CoreDNS, and a number of implementations that can be plugged in as well. But the way kube-dns works is that, first of all, it sets up the DNS client resolver within every pod, so that when the pod does a DNS lookup, it gets sent to kube-dns. So kube-dns essentially becomes a DNS server resolving client queries. And when a new service is created within a namespace, kube-dns essentially creates a DNS name derived from the name of the service and its namespace. So if you have a service named webserver in a namespace called project-red, it creates a name like webserver.project-red.svc.cluster.local within the Kubernetes domain. And so when a client does a lookup for, say, webserver, it'll get pointed to the webserver service in that same namespace by default. But if the client wants to pick a different namespace, it's absolutely free to do so. Simple DNS, and the actual DNS implementation itself runs as a pod within the cluster. Fairly simple.
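Pulling those pieces together, here's a hedged sketch of a Service of type NodePort for that webserver example; the names, ports, and namespace are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: webserver
      namespace: project-red
    spec:
      type: NodePort               # ClusterIP is the default; NodePort also opens a port on every node
      selector:
        app: webserver             # pods with this label become the back ends
      ports:
      - port: 80                   # the port on the cluster IP
        targetPort: 80             # the port on the back-end pods
        nodePort: 30080            # optional; chosen from the node-port range if omitted

With kube-dns running, clients inside the cluster can then reach this service as webserver.project-red.svc.cluster.local, or simply as webserver from within the same namespace.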
Ingress resources, again, run as an add-on in Kubernetes, just like DNS. The Ingress resource essentially allows you to define an arbitrary set of layer 5 through 7 mappings that define what needs to happen when traffic matching that pattern shows up. So you can define Ingress controllers that process that incoming traffic, and these Ingress controllers run as pods within your Kubernetes cluster. I'm showing an nginx example here; you can use nginx as an Ingress controller. And what happens is, when traffic comes into the nginx controller, you provide a mapping that says: if the HTTP host header says the hostname is foo.bar.com, send traffic this way to this pod; if the host header is bar.foo.com, send traffic the other way to the other pod. And so you can use this concept of Ingress controllers to redirect your traffic based on application semantics, and you can have arbitrary L7 controllers depending on what sort of applications you have. Again, a fairly powerful concept, but it's a new concept for many folks in the OpenStack community, and this is again an area where there's a lot of innovation happening as well.

So, I wanted to give you a demo, so let's get to that. If anyone wants to time me in terms of how long this is going to take, feel free; I might first need to combine my screens here. Okay, so what I have here is essentially a master node, which I've got here in this terminal. Just to be safe, I will open up a second window on that same master node. I also have a couple of worker nodes, and these are instances running in the cloud; pick your favorite cloud. So that's node 2. By the way, are you guys in the back able to see this? Okay, someone give me a show of hands. Okay, good. All right, so let's start by working on the master node, and let me make sure I have my kubeconfig there as well when I run kubectl. So let's start here. I'm using kubeadm, which is one of the simpler ways to deploy Kubernetes. There are dozens of deployment tools for Kubernetes today, and many of them come with networking by default, something like Flannel or Calico or other plugins. But if not, kubeadm is a fairly simple way to get started. There are lots of other tools; I'm using kubeadm here because it's simple. So to start with, on the master, you would run something called kubeadm init. Let's hope the demo gods are kind to us. Essentially what kubeadm init does is launch all of the key Kubernetes daemons on the master and configure them. So if things go well, pretty soon it should come up and say, all right, things are looking good so far. Voila. So it says: I'm now ready to have worker nodes join the cluster. Before we get to that, let's do one thing. Let's start kubectl, which is how you look at what's happening in the cluster. In fact, let's do a watch. Can't seem to type today. I am not able to type today. All right, there we go. All right, and let me make that smaller. So it's telling you what's currently running, and basically you've got a bunch of processes running, all in the kube-system namespace, which is sort of the master namespace where all of the Kubernetes-specific stuff runs. Now, before we do anything else, we'll also have to connect some sort of networking, because without networking, a Kubernetes cluster really isn't worth very much. And I'm installing Calico.
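For reference, "installing Calico" here just means applying a manifest with kubectl. Among other things, that manifest contains a DaemonSet, so the per-node agent lands on every node. The following is a heavily trimmed sketch rather than the real calico.yaml, which also carries configuration, RBAC, and other resources; the image reference is only illustrative:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: calico-node
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: calico-node
      template:
        metadata:
          labels:
            k8s-app: calico-node
        spec:
          hostNetwork: true            # the agent programs the host routing table directly
          containers:
          - name: calico-node
            image: calico/node         # pin a specific version in practice
            securityContext:
              privileged: true         # needed to manipulate routes and iptables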
Calico itself is containerized, so you actually launch Calico by running it as a pod, a DaemonSet, on top of Kubernetes. And that means this Calico pod gets launched on every node in the cluster. So when I do this, you'll see the calico-node pods get launched. And if you look at the watch, you see the Calico etcd pod, and you can see the calico-node pod launch. And pretty soon, I think, pretty much most of the pods that are pending will go into a running state; right now there are no schedulable worker nodes yet. So now that that is done, you have networking going. What we'll do is run this join command on the worker nodes. And let's start by doing it on worker 1. And when I do that, you'll see additional calico-node processes launch in here. And in fact, we can also do it from here, so we can launch both workers at the same time. And hopefully, if things go well, pretty soon you'll see a third one as well; it tells you that these are running in the host namespace. And so, guess what? Suddenly you have Calico nodes being created, and now everything in the cluster is running. Was anyone timing me? Was that two minutes, five minutes? You have your Kubernetes cluster up with networking. So next step, let's launch some applications on it. And to give you an idea how you do that, let's go back to the master. And let's do everyone's favorite application, nginx. Actually, before we do that, let's create a namespace. Let's call it policy. And let's try and... I cannot type today for some reason. How many replicas do we want? Let's say 10. And so now you see these nginx pods being spun up and getting IP addresses through Calico. And voila, we are up and running. So now, if you want to look at what the actual network infrastructure looks like, let's go to the worker node. And let's do an ip address show. And you see all of these cali* interfaces; those are essentially the virtual Ethernets connecting to the individual pods. Each of them has an IP address, and each of them connects to an nginx pod. If you do an ip route show, there are your routes. So for all the pods running on that particular node, here are the /32 routes pointing to the virtual Ethernets. For the nginx pods running on a different node, here are the routes that have been advertised over BGP as an aggregated /26 route. So if for any reason your connectivity fails, guess what? You just look at your routing table, and if the route is not there, your troubleshooting starts with BGP. If the route exists but you don't have connectivity, you would do things like look at the iptables rules to see if there's any policy that's stopping the traffic. But networking in Kubernetes, that's all it is. You have really powerful networking. This networking scales: the current scale targets for Kubernetes are up to 5,000 nodes. It used to be 1,000 nodes and 100,000 containers; now it's 5,000 nodes and, I believe, 5 million containers, though I forget the exact number. But it works. It's simple. Right? And that's sort of an illustration of how Kubernetes networking was designed: to keep things simple and yet scalable. And I want to leave you with a little bit more on what's coming tomorrow, which is a bit more of the advanced concepts, right? So tomorrow, at the session where I'm joined by Canonical and AT&T, we'll walk you through a little bit more of the use cases around how you combine Kubernetes and OpenStack together. And there are sort of three scenarios for that.
Scenario one is where you're running Kubernetes on bare metal, you have clusters of Kubernetes, and you have OpenStack also running on bare metal, but you have applications that need to talk to each other. You might have an LDAP server on one and LDAP clients on the other, right? And there are different ways to do that, different approaches. The Calico approach is to do simple networking, essentially BGP peering, and use labels and policy to define what can talk to what. The second scenario is where you're running OpenStack on bare metal and you have individual virtual machines running as Kubernetes nodes, so Kubernetes is in effect running inside of OpenStack. And again, in the case of Calico, which we'll talk about more, it's simple BGP peering, simple networking, and you use policy. And the third scenario is actually a really interesting scenario, which you're seeing more and more in the industry now, which is: Kubernetes is running on bare metal and takes care of things like the auto scaling, the provisioning, the upgrades, the life cycle management, and OpenStack is running as a containerized application on top of Kubernetes. In this case, the benefit is that the OpenStack control plane is containerized, so as you need more capacity it can auto-scale, and Kubernetes can also take care of things like rolling upgrades as you need to move from one OpenStack version to another. And this is the demonstration that AT&T has been showing, which uses OpenStack-Helm as the OpenStack project to do this, and they actually use Calico for the networking fabric. What they're actually showing is being able to do a deployment of OpenStack as a containerized application on Kubernetes, and then upgrade it, and being able to do that within a matter of a few minutes, which is a really powerful concept. And there again, Calico just does BGP peering and simple IP routing, and you use policy to isolate between applications. So that's something we'll talk about tomorrow. So, to wrap up the session here and take any questions, the things I want to emphasize are that when you start looking at Neutron and Kubernetes, these are different abstractions, and they had different targets in mind when they were designed. Neutron was designed for virtual machine scale, and so it can handle 5, 10 VMs a second, depending on how complex your Neutron environment is. Kubernetes is designed for container scale. It's designed around launching hundreds, potentially thousands, of pods a second. And so it's a different scale target. So when you start connecting Neutron and Kubernetes, you have to give a little bit of thought to what you're connecting. Are you overloading the complexity of one into the other when you're connecting them? You really want to give some design thought to that before you do anything. So that's one key highlight I want to point out. The second thing is that Kubernetes' abstractions are different from Neutron's networking abstractions. And that's intentionally so, because Kubernetes is focused on this world of microservices that we live in, which is a much more rapidly changing world than the world that OpenStack and Neutron were originally designed for, which was virtual machine networking. So the abstractions are quite different. Certainly Neutron has slowly evolved, but I think as it's evolved it's also picked up a bit of complexity.
And there needs to be some focus, especially for people deploying and operating networking at scale, on the operations part of it, because there are key considerations from an operational-scale perspective. And finally, there's a key concept in Kubernetes networking, which is that it gives you the option of keeping networking simple and using policy for isolation. And note that wording: it gives you the option. Absolutely, you can implement your networking in any fashion that works for you. But it also gives you the ability to keep your networking configuration simple and actually use a declarative model like network policy to provide the isolation, which is a really powerful concept. And this is a concept that has been embraced by all of the other major container orchestrators; they're all moving in this direction, if they haven't already. And so this is really an opportunity, an inflection point, for us all to think about how we've deployed networking in the past, in the early days of cloud networking, and to rethink what we can do moving forward: how we can fundamentally simplify and focus on scaling networking to much bigger infrastructures. So that said, if there are any questions, I'm happy to take some questions now. I'm not sure how much time I have. Looks like I have about five minutes here. Yeah. Question. Do all the containers in the pod share the same namespaces? No. So a pod is a collection of containers, and essentially the containers within a pod share a network namespace. Only the network namespace; the process and file system namespaces are separate. Again, generally that's a namespace that's created by the kubelet. I couldn't speak to all of the different namespaces, but at least as far as the network namespace, which I'm pretty familiar with, that's what happens. And as to how it's created, there's something called the pause container that gets launched, and the only job of the pause container is to keep that namespace alive. So it's a very simple way to get a namespace going. If you look at what's running on a node, there's something called a pause container, and all it's doing is holding that network namespace. Thank you. You're welcome. At the time you're doing the kubectl apply calico.yml, is there some baseline network already running, or is it just the management plane that is used for all operations? Just the management plane; obviously all the nodes are also connected, but at this point the master doesn't know about any of the nodes. All you're doing when you do a kubectl apply calico.yml is creating a DaemonSet that says: any time a new node comes up, launch the Calico pod on that node. So it essentially preps your Kubernetes cluster for networking. And so the first time a worker node joins the cluster, the calico-node pod launches on that node and you have networking automatically. Okay. I saw that you mentioned some networking solutions for Kubernetes like Calico, Flannel, etc. There are numerous; I mean, there's Contrail, there's Contiv, there are dozens of plugins out there. So I have a similar question regarding the network policies. Say, for example, in an enterprise data center people are moving from iptables to more advanced or modern policy engines like Illumio. So does Kubernetes have plugins for policy engines as well, as of right now? Yeah, good question. So today, policy is typically enforced at the ingress or egress point of the virtual Ethernet going into the pod.
And it's really a function of the network plugin to implement policy. So Calico has sort of its own policy engine, and the team behind Calico helped develop the network policy of Kubernetes, so Kubernetes policies are sort of a subset of Calico's. That said, there are different policy implementations. Some of them are closed source, some are open source, and some of them are learning approaches; for example, things like Illumio tend to be learning approaches. It's really a function of the network plugin. The thing I will point out is that in the world of microservices, especially as you move towards things like functions as a service and serverless, the model where you try to learn something in the network doesn't necessarily scale, because things are too transient. And so most of the container orchestrators tend to have this model of declarative syntax, where you actually declare what you want the application to do and then enforce that policy. So things like Calico follow that model. Thank you. Hello. My question is about the ingress filtering. You showed an example where we're filtering on the host, which is a really common case. How flexible is this part of the stack? Can it filter on some other aspect of the packet? So let me distinguish between policy and Ingress. With Ingress, when I showed you the nginx example there, nginx is the Ingress controller, and you have the Ingress resource defined that calls out: hey, for this mapping you need to do this. So that's the concept of Ingress. Now, network policy is a distinct concept from Ingress, and in network policy today Kubernetes supports just ingress filtering. What that means is that there is no egress filtering to the outside world; you cannot express in Kubernetes today how to protect the rest of the world from the pod. So there is no egress filtering yet in Kubernetes. If you look at specific network policy implementations like Calico, for example, Calico does have egress filtering. It also has more operations-focused policy. But Kubernetes itself is sort of a developer-focused platform, and so the community decided to do ingress filtering first; egress filtering has been deferred for later. It's possible it might come; there have been discussions within the Kubernetes SIG Network community about potentially looking at egress filtering as well. Good question, by the way. Great. Thank you so much for your questions. I know the next discussion is going to be interesting as well, which is going to be talking about combining Kubernetes and OpenStack and the four-and-a-half-dozen ways you can do that. Some are simple, but many of them tend to be very complex. And I'd also give a plug for joining the session tomorrow that I'll be doing jointly with Canonical and AT&T, which talks about the same concept but purely from a networking perspective. Thank you so much. Cheers.