We're good to go. Hi, everyone. I am Rudra Roke. I'm part of the engineering team at Contrail, and today I'll be talking about Kubernetes and OpenStack, and in particular how OpenContrail networking connects containers, pods, VMs, and bare metal servers in an OpenStack environment.

So in today's environment you have a Kubernetes cluster, as we saw in the keynote this morning, started on top of an OpenStack environment: a bunch of VMs are brought up for Kubernetes, and you have an OpenStack cluster underneath. The Kubernetes cluster has its own network plugin, either through the CNI interface or the kubelet plugin model. Then you have OpenStack with its own networking model through Neutron plugins, which also supports bare metal in the OpenStack environment. So you have these hybrid networking environments sitting in a single cluster, and they're not really able to talk to each other; they're isolated in terms of management, overlay, and nesting of the network. The only way to get from an OpenStack VM to the Kubernetes cluster is typically through some sort of gateway mechanism, and hence securing all of this becomes quite complex.

To this end, the Contrail solution provides a single network controller for your hybrid workloads. Whether you have OpenStack VMs running, a Kubernetes cluster in an OpenStack environment with its pods, or bare metal servers connected to this environment, all of them can talk to each other through a single network controller, which is the OpenContrail controller. Since we are running Kubernetes in VMs, you could potentially have forwarding happening in the kernel module of those VMs; in addition, the servers have their own forwarding module through the Neutron plugin. In such nested environments you're really living in an isolated world, and the OpenContrail solution solves this by bridging the two together through subinterfaces.

The other benefit is that you can create a network which spans Kubernetes, OpenStack, and bare metal. So we can have a virtual network where a pod, a virtual machine from OpenStack, and a bare metal server all talk to each other as though they belong to a single network domain. Because of this seamless integration with OpenContrail, you can apply better security policies and control how your applications talk to each other. And finally, the single pane of glass gives you a common point of control and a common API, and also provides the monitoring and analytics you need to troubleshoot this complex hybrid workload environment.

Let me start with the key components of OpenContrail, which most of you are probably familiar with. We have the notion of virtual networks; in this example, red and green. These are two isolated domains: any workload you launch in one of these domains cannot talk to the other unless you create a policy and connect the networks. Once you connect the networks based on a protocol and port, saying these two networks can only talk on this protocol and this port, you can even insert a service, saying the green network can talk to the red network through a firewall service, which could be provided by a virtual machine or a container-based firewall. So the network policy is an abstraction that allows connectivity between virtual networks. Contrail also has the notion of interfacing with physical gateways.
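To make the red and green example concrete, here is a rough sketch of how two isolated virtual networks and a connecting policy could be created through Contrail's Python client (vnc_api). The class and field names are recalled from memory and may differ between releases, and the controller address and project name are placeholders, so treat this as illustrative rather than as the exact workflow used in the talk.

```python
# Sketch: create two isolated virtual networks and a policy that connects them.
# Assumes the Contrail Python client (vnc_api); exact names may vary by release.
from vnc_api.vnc_api import (
    VncApi, VirtualNetwork, VnSubnetsType, IpamSubnetType, SubnetType,
    NetworkPolicy, PolicyEntriesType, PolicyRuleType, AddressType, PortType,
    ActionListType, VirtualNetworkPolicyType, SequenceType)

api = VncApi(api_server_host='192.0.2.10')          # placeholder controller address
project = api.project_read(fq_name=['default-domain', 'demo'])
ipam = api.network_ipam_read(fq_name=['default-domain', 'default-project',
                                      'default-network-ipam'])

def make_network(name, cidr):
    """Create a virtual network with a single subnet."""
    vn = VirtualNetwork(name, parent_obj=project)
    prefix, length = cidr.split('/')
    vn.add_network_ipam(ipam, VnSubnetsType(
        [IpamSubnetType(subnet=SubnetType(prefix, int(length)))]))
    api.virtual_network_create(vn)
    return vn

red = make_network('red', '10.10.1.0/24')
green = make_network('green', '10.10.2.0/24')

# Allow bidirectional traffic between red and green on any protocol and port.
rule = PolicyRuleType(
    direction='<>', protocol='any',
    src_addresses=[AddressType(virtual_network=red.get_fq_name_str())],
    dst_addresses=[AddressType(virtual_network=green.get_fq_name_str())],
    src_ports=[PortType(-1, -1)], dst_ports=[PortType(-1, -1)],
    action_list=ActionListType(simple_action='pass'))
policy = NetworkPolicy('red-green-allow', parent_obj=project,
                       network_policy_entries=PolicyEntriesType([rule]))
api.network_policy_create(policy)

# The policy only takes effect once it is attached to both networks.
for vn in (red, green):
    vn.add_network_policy(policy, VirtualNetworkPolicyType(
        sequence=SequenceType(0, 0)))
    api.virtual_network_update(vn)
```

A firewall or load balancer service could be inserted on the same policy to build the service chain described later in the talk.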
Rather than requiring you to instantiate a software gateway, Contrail can directly peer with physical gateways such as the MX and connect your overlay, your virtual network, to the physical world. We also have layer 4 load balancers, which are ECMP-based, so you don't need to terminate your TCP sessions: you can instantiate a load balancer and it will do a five-tuple-based hash and distribute the traffic. In addition, we do full proxy-based HTTP load balancing as well.

If you look at the overall architecture for Contrail: Contrail as an SDN solution can work with many orchestrators, OpenStack of course being one. We have homegrown orchestrators with some of our customers, it works with Kubernetes, and it works with vSphere. The Contrail controller listens to any network object messages and then goes ahead and instantiates those objects, either in a physical gateway, in a top-of-rack switch, or on a bunch of server nodes through an XMPP connection.

What I'm getting towards in the second half is a demo, and these are basically the building blocks for the demo. We have two networks, a green network and a red network. A bunch of pods have been spawned on the green network, and the red network has other Kubernetes pods as well as some virtual machines and some bare metal servers. The green network can talk to the red network only through a service chain in this example, and the service chain could actually have multiple services in it: a load balancer, a firewall, and so on. The load balancer members can be pods, bare metal servers, or virtual machines. The red network can go out to the internet by peering with an MX gateway, so you have internet connectivity for the red network as well.

So what is the Kubernetes deployment model typically used in OpenStack? As you saw in the keynote today, you launch a bunch of VMs. In this case you see three virtual machines, and each of them is given a Kubernetes role: one of them is the Kubernetes master, and the others are Kubernetes nodes, which is where the Kubernetes pods get scheduled and run. A pod can be one container or multiple containers in one environment.

To support Contrail in a Kubernetes environment, we've added a couple of things. There's a component called the OpenContrail kube-network-manager on the master: it listens to all API and scheduling messages from the Kubernetes API server, and based on that it creates all the Contrail resources that are required, that is, a virtual machine, a virtual machine interface, and the network it should get plugged into. Then on the node, we have added our plugin. In the past, with Kubernetes 1.1, we used the kubelet plugin model, but with Kubernetes 1.4 we are moving to the standards-based CNI plugin model. So on each of the nodes, the Contrail plugin gets invoked whenever a pod is scheduled on that particular node.

I'll come back to this later, but this slide is essentially about how a bare metal server gets connected to the whole overlay model through an EVPN mechanism. As part of the demo, we'll see layer 2 MAC address-based routes being pushed to the QFX, which is a top-of-rack switch, and we have a VXLAN endpoint going from our vRouter into the QFX.

This is our demo setup. We have two compute nodes, which are two OpenStack compute nodes, and we have one control node.
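As a toy illustration of the ECMP behavior mentioned above (not Contrail's actual forwarding code), a five-tuple hash simply maps each flow deterministically onto one backend, which is why no TCP termination is needed:

```python
# Toy illustration of five-tuple ECMP backend selection; not the vRouter's code.
import hashlib

def pick_backend(src_ip, dst_ip, proto, src_port, dst_port, backends):
    """Hash the flow's five-tuple and pick one backend deterministically."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

# Members could be pods, OpenStack VMs, or bare metal servers.
members = ["10.10.1.4", "10.10.1.5", "10.10.2.6"]
print(pick_backend("192.0.2.1", "10.10.1.100", "tcp", 33012, 80, members))
```

Every packet of the same flow hashes to the same member, while different flows spread across all members.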
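Conceptually, the kube-network-manager component described above is a watcher on the Kubernetes API server that turns pod events into Contrail objects. Below is a minimal sketch of such a watch loop using the official Kubernetes Python client; the create_contrail_interface function is a placeholder standing in for the Contrail-side work, not the real implementation.

```python
# Minimal sketch of a kube-network-manager-style watch loop (not the real code).
from kubernetes import client, config, watch

def create_contrail_interface(pod):
    """Placeholder for the Contrail side: allocate a VM object, a VM interface,
    and an IP in the pod's virtual network, then hand it to the CNI plugin."""
    print(f"would wire pod {pod.metadata.namespace}/{pod.metadata.name}")

config.load_kube_config()              # or load_incluster_config() on the master
v1 = client.CoreV1Api()
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces):
    pod = event["object"]
    # Act once the scheduler has bound the pod to a node.
    if event["type"] in ("ADDED", "MODIFIED") and pod.spec.node_name:
        create_contrail_interface(pod)
```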
What you see in blue is the Kubernetes cluster. We've launched three VMs which form the Kubernetes cluster: there's a master, and then there are two nodes. The Contrail vRouter kernel module, which is responsible for all the forwarding, is running in the Kubernetes node VMs as well as underneath on the OpenStack servers. So if you look at the two compute nodes, you'll see two levels of vRouters. It looks like a nested model, but we plug subinterfaces directly from the upper-level vRouter into the vRouter on the server and just bridge the traffic through, so the forwarding and all the route information sit only on the lower kernel module.

Coming back, just to give an example here: we have two green pods and two red pods, which are all on the Kubernetes cluster. We have a green VM, which is actually an OpenStack VM. And then we have a bare metal server, which is sitting in your data center but is pulled into the red network. So you have two pods and a bare metal server in the red network, and two green pods in the Kubernetes environment. And we have a network policy to make sure that the green network and the red network can talk to each other. At the end of the demo (we have a recorded demo) we'll see that everybody is able to talk to each other, whether it's a bare metal server, a pod running in the Kubernetes cluster, or a VM in the OpenStack environment.

So I'm going to switch to the video. Is it switching? Looks like we're not able to switch to the video here. Is there any way? All right, that's good. OK. Here we have a Kubernetes cluster which has been created. We've launched three VMs, and they are running on the two compute nodes that we have, compute node one and compute node two. There's a master and two minions, and they have been launched on a base network, the 192.168.1.x subnet. So we have these three VMs running and forming a Kubernetes cluster.

Now, on the control side of things, we can see that we have two compute nodes, which are represented by virtual routers, and we have roles such as analytics, database, config, and control, which together act as the Contrail controller. Looking at compute node two, we can see that one of the minion VMs is scheduled there, and the other compute node has the Kubernetes master and the second minion scheduled on it. So we have three VMs running our Kubernetes cluster. Now, if we go into the master, we can see that the minions are ready and good to use. Just to make sure, we check the port connectivity between the master and the Kubernetes nodes: the Kubernetes master listens on port 8080 for connections from the minions, and the two minions are connected to 192.168.1.3, which is the master.

Now we're going to launch a pod in the green network. As I mentioned earlier, we have two networks, a green network and a red network. Sorry, we are launching the pod in the red network. What happens at this point is that it gets an IP address from the red subnet, which is available even to your OpenStack VM networking. 10.10.1/24 is our red subnet and 10.10.2/24 is our green subnet. We can see that when we launch the pod, a virtual machine interface has been created in Contrail, and 10.10.1.4 is the IP address of the pod. Now, the important thing to notice here is that it has also been plumbed into the lower vRouter.
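For reference, a pod like the one launched in the demo could be created with the Kubernetes Python client along the lines sketched below. The annotation key used to select the Contrail virtual network (opencontrail.org/network) is an assumption about the integration's convention, not a documented API, and the image and names are placeholders.

```python
# Sketch: launch a pod attached to the "red" virtual network.
# The annotation key and value format are assumptions, not a documented API.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="red-pod-1",
        labels={"app": "red"},
        # Hypothetical annotation telling the Contrail plugin which network to use.
        annotations={"opencontrail.org/network": "red"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="web", image="nginx")]))

v1.create_namespaced_pod(namespace="default", body=pod)
```

Once scheduled, the kube-network-manager and CNI plugin described earlier would wire the pod's interface into the red network and down through the VLAN subinterface to the lower vRouter.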
So, as I was saying about the nesting of vRouters, the plumbing has also happened on the lower vRouter through a VLAN subinterface. The next step is to look at the Docker containers, and they're all running. Now we'll go ahead and launch another pod, in the green network. The red pod is running in the red network, so now these two pods are in two isolated networks and should not be able to talk to each other. One has the subnet 10.10.1/24, the other 10.10.2/24; they are completely isolated pods within the Kubernetes VMs. We can see here that both pods are connected and their MAC addresses have been plugged into the macvlan driver of the Kubernetes VM.

Now we're going to try to ping from the green network to the red network, and the ping fails. That makes sense: since these two networks are fully isolated domains, they shouldn't be able to talk to each other. The Contrail network policy is the fundamental construct that allows traffic between networks, so we're going to create a policy saying that for any traffic between the red and green networks, on any of the pods, we allow bidirectional traffic. Once we create this policy, we need to attach it to both the green and the red networks, and as soon as we attach it, we can see that traffic starts flowing between the two pods on the Kubernetes cluster.

So far this is working just on the Kubernetes VMs. Now, to bring it down one level, the next phase is to test whether these pods can talk from Kubernetes to an OpenStack VM. This is our UI, where you can see how the policy connects different virtual networks: we have red and green connected through the red-green allow policy, with the packet in and out counts and all the analytics. As I mentioned earlier, there's the concept of subinterfaces to eliminate the nesting of vRouters, and you can see that here.

The next example is how you connect these Kubernetes pods to a VM in OpenStack. To go through that, as step one we check whether there are any Nova VMs running, and then we go ahead and launch a Nova VM in the green network. Because we have already created a policy connecting the red and green networks, any pod, whether in the red network or the green network, should be able to reach this virtual machine. So we got an IP address from the 10.10.2 subnet in this case, and we'll connect to that VM through the link-local interface. It has its IP address, as I said, from the green network. Now we want to launch a pod in the red network and see whether the pod is able to talk to the virtual machine. So here we launch another pod: we already have two pods talking to each other, but for talking to the VM we'll launch one more. In this nested environment, the traffic goes through the subinterface. So this is a red pod which is going to talk to your green VM: a pod in the 10.10.1 network is actually talking to an OpenStack VM in 10.10.2.

You can see that this is the OpenStack VM, and then we have all the pods running inside the Kubernetes VMs, whether red or green, and we have a VM in the green network directly on the OpenStack cluster. They are all sitting in the same networks, or connected through policy, and they're able to talk to each other seamlessly with a single SDN controller and a single notion of SDN networking. So again, there are a few pieces of analytics information that you can look into.
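The Nova VM step could be scripted along these lines with openstacksdk; the cloud name, image, and flavor below are placeholders standing in for whatever the demo environment actually used.

```python
# Sketch: launch a Nova VM on the green network using openstacksdk.
# Cloud name, image, and flavor are placeholders for the demo environment.
import openstack

conn = openstack.connect(cloud="demo")       # reads credentials from clouds.yaml

server = conn.create_server(
    name="green-vm-1",
    image="cirros",                          # placeholder image name
    flavor="m1.tiny",                        # placeholder flavor name
    network="green",                         # the green virtual network
    wait=True)

# With wait=True the server object is returned once active, with its addresses.
print(server.name, server.addresses)
```

Because the red-green policy is already attached to both networks, a red pod should be able to reach this VM as soon as it comes up.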
You can look into stats and other things, but let me quickly jump ahead, since we have only about a minute left. The last example I wanted to show is how these pods can also talk to a bare metal server sitting somewhere in your data center. This is the bare metal server that one of our pods is going to try to talk to. The bare metal server sits in a network which has pushed its MAC address to our QFX, and the QFX also has another arm into the OpenContrail networks through a VXLAN tunnel. So essentially we have extended the whole network onto the bare metal as well. And now what we're doing here is pinging from the bare metal into one of the pods, 10.10.2.6, which is a green pod in the Kubernetes cluster. This view is showing the MAC address information that is shared as the next hop.

Since we are a little short on time, I would like to end the demo with this: we have the Kubernetes minion with one of the pods here, we have a virtual machine in the green network, and we have a bare metal server in the red network. And we are pinging from the pod to the bare metal server, from the bare metal server to one of the pods, and from the green VM to another pod. So you have truly hybrid workloads running as OpenStack VMs, bare metal servers, and Kubernetes pods, and all of them are able to talk to each other just by the creation of two networks in Contrail, with the overlay connecting everything. So with that, I would like to thank everyone.

OK. Yeah, east-west, yes. Sorry, we can, right? I mean, you can create more networks to do microsegmentation of any kind. In Kubernetes we could deploy app-level segmentation, and that would essentially isolate everything based on services within a Kubernetes app as well, yes. We can do layer 3 connectivity with a VPN connection; that can be achieved across data centers, yes. I don't believe we need to do that; VXLAN, in our case, is used only for bare metal reachability. Outside of that, we use other mechanisms, mostly layer 3. In our network policy, you can have two networks and insert any service in between, be it a VM which gives you security or a VM which gives you load balancer services. We can service chain, yes, and you can even chain a bunch of services between two networks, yeah.
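To close the loop, the final round of pings shown in the demo could be reproduced with a small script like the one below; the pod name and target IPs are illustrative values standing in for the demo's actual workloads.

```python
# Sketch: verify end-to-end reachability from a pod to the VM and bare metal.
# Pod name and target IPs are illustrative, not the demo's exact values.
import subprocess

targets = {
    "green OpenStack VM": "10.10.2.3",
    "bare metal server": "10.10.1.5",
    "red pod": "10.10.1.4",
}

for label, ip in targets.items():
    result = subprocess.run(
        ["kubectl", "exec", "green-pod-1", "--", "ping", "-c", "3", ip],
        capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "unreachable"
    print(f"{label} ({ip}): {status}")
```

With the red-green policy attached and the QFX extending the overlay to the bare metal server, all three targets should report reachable.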