Hi everyone, I'm part of the OpenContrail team, and today I'll be talking about our Kubernetes and OpenShift integration for Contrail Networking. Towards the end we'll also talk about AppFormix; Travis will give you an overview of that. This is the general agenda for today. Let me give you a quick overview of Contrail: we are an SDN solution for OpenStack, and we are now also working on container technologies such as Mesos, OpenShift, and Kubernetes, providing an SDN solution for those as well. I'll also talk about a few things we have done in terms of mapping Contrail resources to Kubernetes, and lastly about the new Helm-based deployment that we use for both OpenStack and OpenContrail.

To begin with, in the latest release, Contrail 4.0, we have containerized our whole control plane. Previously we ran all these processes on bare-metal servers; now they all run as Docker containers. There are four containers: three on the controller side — the controller container itself, which runs the configuration and control-node processes; an analytics container; and an analytics DB container, which is our Cassandra database for storing all the stats and UVEs. In addition, we have a vRouter agent container, which runs on every compute node. So in a Kubernetes cluster, the Kubernetes master runs the controller components, and every Kubernetes slave or minion node runs our vRouter agent container. There is also a kernel module, the vRouter kernel module, which handles our forwarding. Each of these containers can be deployed in many ways, of course — Ansible, Puppet, and so on — but the new thing we are looking at for this release is Helm-based deployment, which I'll show you in a demo towards the end.

Now, how does Contrail integrate? What we've done is take our existing control plane and add some new components that interface with Kubernetes. Fundamentally, all the constructs we have always had in Contrail can be leveraged for any of the new container orchestrators; the new components exist to interface with the Kubernetes components, and it's the same in the case of OpenShift. What you see here is a typical Kubernetes deployment: the Kubernetes master running on top, and two minions — compute nodes in this case — which handle scheduling of the pods.

Contrail comes in with a new component called contrail-kube-manager, and the idea is that it interfaces with the Kubernetes API server. Everything in Kubernetes is backed by etcd, and you can watch the stream of events. So let's say a pod is being scheduled on a particular compute node: contrail-kube-manager is continuously watching that streaming API, and based on the events it takes actions — allocating an IP address to that pod, finding out which compute node it is scheduled on, creating the linkages from that pod to that compute node, and so on. IP allocation is done centrally, so most of the job is done by contrail-kube-manager; a sketch of that watch loop follows below. There's another component, which is our CNI plugin. We have open-sourced a Golang-based CNI binary for Contrail. That is the Container Network Interface piece that finally delivers the IP address to the pod scheduled on a particular compute node.
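As a rough illustration of that mechanism — a minimal client-go sketch, not the actual contrail-kube-manager code — watching the pod event stream looks something like this:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config: the manager runs as a pod next to the API server.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch the stream of pod events across all namespaces — the same
	// streaming API the talk describes contrail-kube-manager consuming.
	w, err := client.CoreV1().Pods(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// On an ADDED event, a real manager would centrally allocate an IP
		// for the pod and create the linkage to the node it landed on.
		fmt.Printf("%s pod %s/%s on node %s\n", ev.Type, pod.Namespace, pod.Name, pod.Spec.NodeName)
	}
}
```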
If you notice at the bottom, the Contrail CNI plugin gets invoked whenever a pod gets scheduled on that particular compute node. The CNI plugin talks to our Contrail agent to retrieve the IP address that was centrally allocated, and returns it to the kubelet — I'll show a skeletal example of such a plugin in a moment. In addition, it plugs the veth interface for the pod into the vRouter. Once that interface is plugged into the vRouter, we can leverage everything Contrail has today in terms of the basic forwarding paths — our encapsulations and so on — as well as all the additional features we have, such as network policy.

Now, that was the workflow; the first thing we had to figure out was the resource mapping. What you see on the left are the resources that exist in Kubernetes. An important thing to note is that there is no notion of a network resource in Kubernetes. Fundamentally, when you deploy a Kubernetes cluster it has three IP ranges: one for allocating to pods, one for services, and a last one if you need public access. So we had to figure out different mechanisms for creating networks underneath a Kubernetes cluster. But before I jump into that, let's look at the current mapping of the resources.

A namespace is the notion of a virtual cluster within a Kubernetes cluster, and we map it to what you could think of as a Keystone project. A single project can be shared across all namespaces, or each namespace can get its own project, depending on configuration and on whether it's an OpenShift or a Kubernetes deployment. A pod is something like a virtual machine: it has an interface, gets an IP address and a MAC address, and that interface is plugged into our vRouter kernel module. A service is the concept of load balancing a whole bunch of pods: if you launch 20 web servers and want to expose only a single virtual IP address, you instantiate a Kubernetes service backed by that list of pods. Contrail supports this with our native load balancer, which means there is no proxying as such — you are ECMP-ing traffic to all those back-end pods. Ingress is for traffic entering your Kubernetes cluster from outside; URL-based routing is one example — you can say URL A goes to service 1 and URL B goes to service 2 — and for it we instantiate our HAProxy-based load balancer. The last thing is network policy, which is currently in beta and will become GA, I guess, in Kubernetes 1.7. We have implemented it using security groups. It controls further isolation within a namespace: one group of pods talking to another group of pods, and whether a certain whitelist of traffic is allowed or not.

These are the isolation types, and the top of this diagram shows the default mode for Kubernetes: a flat networking model with no isolation. When you launch pods, all pods can talk to each other and all services are reachable across the cluster. We support this by creating a single virtual network in Contrail where all IP addresses are allocated from one range and everybody can talk to each other.
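To make the CNI flow above concrete, here is a skeletal plugin built on the upstream CNI library. This is a hedged sketch, not our actual open-sourced binary: the hard-coded address stands in for the answer a real plugin would fetch from the local vRouter agent, and the veth plumbing is omitted.

```go
package main

import (
	"net"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/current"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd runs when kubelet invokes the plugin for a newly scheduled pod.
func cmdAdd(args *skel.CmdArgs) error {
	// 1) A real plugin asks the local vRouter agent for the IP that
	//    contrail-kube-manager allocated centrally; this is a placeholder.
	ip, ipnet, err := net.ParseCIDR("10.32.0.5/24")
	if err != nil {
		return err
	}
	// 2) Create the veth pair for args.Netns and plug the host end into
	//    the vRouter (omitted in this sketch).
	// 3) Hand the result back to kubelet.
	result := &current.Result{
		IPs: []*current.IPConfig{{
			Version: "4",
			Address: net.IPNet{IP: ip, Mask: ipnet.Mask},
		}},
	}
	return types.PrintResult(result, current.ImplementedSpecVersion)
}

func cmdCheck(args *skel.CmdArgs) error { return nil }
func cmdDel(args *skel.CmdArgs) error   { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "sketch of a Contrail-style CNI plugin")
}
```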
There's a need for further isolation based on namespaces in Kubernetes, which are equivalent to virtual clusters. So we've added a special annotation on the namespace resource: when you set opencontrail.org/isolation to true, we create the namespace with its own virtual network. Since every isolated namespace then has its own virtual network, pods launched in those respective namespaces cannot talk to each other. So you don't have to drop down to using network policies directly to isolate them; you do it at a much higher level by creating the namespaces in isolated mode, and they are automatically prevented from talking to each other. If you want to go further and tighten isolation even more, you can then use network policy on the pods within a namespace. A sketch of creating such a namespace follows below.

This is an example of the ingress I mentioned earlier, which we've implemented using HAProxy. When a request comes in on an external IP for a URL like xyz.com/dev, it is automatically routed by HAProxy to the dev web service, or to the QA web service, and so on — there's a sketch of such a fan-out after this section too. So single-service, simple fan-out, and name-based load balancing are all available for ingress. And on the back end, when you reach a service, we use ECMP load balancing — so here we are in fact using two levels of Contrail load balancing: HAProxy at the ingress and ECMP at the service.

Now, coming to deployment: I mentioned earlier that we support many different deployment methods; what we'll look at here is how we use Helm to deploy Contrail. Let me go to the next slide and come back, actually. The basic idea is that you have your bare-metal cluster, and the new thing happening in deployment is that Kubernetes is also becoming the way to manage your servers and control nodes. Step one is to deploy Kubernetes on the bare-metal servers. After that we use Helm charts to deploy the OpenContrail pods — in fact, all of the Contrail components end up as pods in Kubernetes. We also use Helm charts (openstack-helm) to deploy the OpenStack pods, and rolling upgrades and other features are supported as part of these charts.

Once you have the first two layers in place, you can launch a whole bunch of VMs and, from a set of them, create another Kubernetes cluster running in virtual machines, or an OpenShift cluster, or standard virtual machines running on top of OpenStack. All the blue VMs you see can be carved into clusters as per your needs. The one component I wanted to highlight on the right is the Contrail controller: you only need a single controller for all your SDN needs — for the nested Kubernetes cluster, for the OpenShift cluster, and even for the OpenStack cluster. A single Contrail controller with the right credentials can manage all of them.

So let me go back. This is the deployment: step one, we have deployed Kubernetes in this scenario. Next we launch all the Contrail pods — controller, analytics, and analytics DB — plus contrail-kube-manager, and the Contrail vRouter agent runs as a Kubernetes DaemonSet, which means it gets deployed on every single compute node (sketched below as well).
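Here's what the isolated-namespace annotation looks like in practice — a minimal client-go sketch; the namespace name "dev" is just an example, while the annotation key is the one from the talk:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The annotation from the talk: contrail-kube-manager sees it and
	// backs the namespace with its own virtual network, so its pods
	// cannot reach pods in other isolated namespaces.
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "dev",
			Annotations: map[string]string{"opencontrail.org/isolation": "true"},
		},
	}
	if _, err := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```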
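And the URL fan-out from the ingress example, expressed against today's networking/v1 API rather than the API of the era of this talk — the host, service names, and port are illustrative:

```go
package main

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fanOutIngress builds the kind of ingress described in the talk:
// xyz.com/dev routes to the dev web service, xyz.com/qa to the QA one.
// Contrail realizes this rule set with its HAProxy-based load balancer.
func fanOutIngress() *networkingv1.Ingress {
	pathType := networkingv1.PathTypePrefix
	backend := func(svc string) networkingv1.IngressBackend {
		return networkingv1.IngressBackend{
			Service: &networkingv1.IngressServiceBackend{
				Name: svc,
				Port: networkingv1.ServiceBackendPort{Number: 80},
			},
		}
	}
	return &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "web-fanout"},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "xyz.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{
							{Path: "/dev", PathType: &pathType, Backend: backend("dev-web-service")},
							{Path: "/qa", PathType: &pathType, Backend: backend("qa-web-service")},
						},
					},
				},
			}},
		},
	}
}

func main() {
	_ = fanOutIngress() // submit with NetworkingV1().Ingresses(ns).Create(...)
}
```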
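Finally, the vRouter agent DaemonSet, sketched minimally — the image name and labels are placeholders, not the shipped chart:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// vRouterAgentDaemonSet shows why a DaemonSet fits the agent: exactly one
// copy per node, added and removed automatically as nodes come and go.
func vRouterAgentDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "contrail-vrouter-agent"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "contrail-vrouter-agent", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					HostNetwork: true, // the agent programs the node's own datapath
					Containers: []corev1.Container{{
						Name:  "agent",
						Image: "opencontrail/vrouter-agent:4.0", // placeholder image
					}},
				},
			},
		},
	}
}

func main() {
	_ = vRouterAgentDaemonSet() // submit with AppsV1().DaemonSets("kube-system").Create(...)
}
```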
So if you add or remove compute nodes, because it's a DaemonSet the agent automatically gets scheduled and launched on them. The next step is to use the openstack-helm chart, which brings up all of the OpenStack components as pods. At this point the underlying Kubernetes is making sure that all your control components — be it Contrail or OpenStack — keep running; if any of them go down, it relaunches those pods and manages them completely. So if slave one were to go down, Kubernetes would ensure that all of those pods get scheduled on another compute node. It provides resiliency, and all of that is taken care of by Kubernetes.

Let me go through a quick demo. We have the deployment already — I have to escape out of full screen. I don't know if you can see this clearly, but here we have all the nodes; this is our Kubernetes cluster with four nodes. I'm using the command helm ls — Helm and Tiller are the combination that manages your Helm charts — and in this case we have deployed Contrail as pods. You can see all the pods in kube-system: one controller pod, which actually runs multiple containers (all our controller containers), and the agent pods running on three compute nodes. The node count is the same, but what I'm trying to show is that we have also created another namespace called openstack, with all these services deployed via Helm. And you can see that all the OpenStack components are running as pods, deployed by Helm. All these pods have front-end services, so instead of accessing the pods directly you go through a service IP, as I showed earlier; here you can see the Horizon service IP, and that's what you would connect to — I guess we've already connected to it to get to Horizon. So everything is being managed by Kubernetes, both Contrail and OpenStack, and that's what we wanted to show. We're running a little late, so I'll switch over to Travis.

Thanks, Rudra. I have only a couple of minutes, but I wanted to tell you about AppFormix. AppFormix is a company that Juniper acquired back in December. It's a cloud operations product for managing OpenStack, Kubernetes, and public clouds, and today I'm going to show you a little bit of the Kubernetes integration we've done. AppFormix is designed to be extensible, so we can integrate with various cloud management platforms — OpenStack, Kubernetes, VMware, AWS. We bring in the data and give you visibility across all the layers of your environment: from the physical layer, through the virtualization layer, be it KVM or Docker, up to the applications and services running on top of the infrastructure.

At the top level you can get a snapshot of the entire infrastructure. We see here the worker nodes in this Kubernetes environment, the number of containers, the number of pods, and a real-time status of how these elements are meeting the SLA configured by the user. The user can set policy for automatic monitoring, and AppFormix does that monitoring in a distributed fashion so that it scales with the size of the infrastructure. Policy is evaluated at the edge, where the workloads are running, so we can do it in real time and send signals forward any time we detect a condition that violates the policy. And it's easy to navigate across these layers.
I can look at a single host, a single node, and get visibility into all the containers running on it. A snapshot of the resources on that host is streamed in: I see all the containers and whether they're meeting their SLA — in this case one of them is not running, so we mark it as missing its heartbeat. We can see which pod that container belongs to, as well as the namespace or the service, so you can cut across these layers. I'm at the physical layer; I can move up to the level of a pod if I want to understand what's happening inside a single pod, or I can go higher — an entire namespace, or a single replication controller. These are the various dimensions along which I can cut across the elements of the infrastructure.

Now, of course, inventory alone is not that useful; you also want to see actual metrics. Although the agent running on the worker node collects these metrics and evaluates policy against them in real time, we also stream forward a summary of the data for history, so that as a user you can see what happened in the past, over time, at the host level as well as at the container level. Here you can see the memory usage on the host and the memory usage by each container, and we can then create policy on top of that. I can add a number of rules to monitor the containers and the hosts. These are policy-based rules, so I can select a subset of my hosts, or a subset of my containers based on the various groupings in Kubernetes — namespace, replication controller, service, or pod level. If I want to monitor the Redis service, I can put a certain policy in place, and no matter how many containers are present inside that replication controller, they all get monitored with the same policy as they dynamically scale up or down. AppFormix pushes that policy out to the worker nodes so they are monitored in real time.

So that's a very brief overview of what AppFormix can do — a bit of the monitoring aspect. It also has additional features for reporting, so you can look at resource usage over time and figure out whether you have the right types of resources or need to grow your infrastructure. I'd be happy to talk with anyone about that in more detail at the Juniper booth. Thank you very much.