Hi everyone, I'm Rudra Roke, part of the Contrail team, and today I'm here to present a couple of new things that we've introduced in the OpenContrail solution. The first is service chaining version 2, a new way to do physical, virtual and container-based services. The second is container networking: we are working with Kubernetes, Mesos and other solutions, and I'll cover how OpenContrail plugs into each of them.

To begin with, a quick recap of what the OpenContrail solution looks like. We have a Contrail controller running in HA mode, in which we run our config node, serve REST APIs, and run a complete BGP-based control mechanism to manage a set of compute nodes. Here you can see a couple of hypervisors; these could be Linux hypervisors or ESX computes, any of the hypervisors we currently support. We have a kernel module, the vRouter, which runs on all the hypervisors controlled by the Contrail controller. You can use any of the different orchestrators, OpenStack being one of them. The OpenContrail solution also supports managing, or interfacing with, physical gateways. The gateways can come from any third-party network vendor, so your compute node can go out to the internet directly through the gateway, unlike many other solutions where you need a software gateway in the middle. That means there is no single point of failure in the form of a software gateway.

There are two levels, which is the common framework for anything in the overlay world: the physical infrastructure, and then the complete overlay networking. Everything we create is built on the fundamental concept of a virtual network. Think of various virtual networks; in OpenContrail you can only connect them through network policies. The basic idea is that within a virtual network any of the VMs can talk to each other. In the example here you have a green virtual network, a blue virtual network and a yellow virtual network: all the VMs within a network can talk to each other, but when you want to go from one network to another you need to connect them using a policy. That policy is different from security groups, where you instantiate the security group on a particular VM. A policy is applied to a whole virtual network, so it is automatically inherited by any VM launched in that virtual network. By default everybody in the green network can talk to each other, but when they want to go to the blue network we have a policy saying only HTTP traffic can go through, and that policy applies to everything in those networks.

The important concept I'm going to continue with is service chaining. If you look at the blue network and the yellow network, we have forced by policy that any traffic between them has to go through a firewall. In this case it is a firewall, but it's a pluggable module, so it could be any third-party service, whether physical, virtual or container-based. Essentially we have a vSRX, a virtual SRX, sitting in between, and you can apply firewall rules on that vSRX itself.
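To make the policy concept concrete, here is a minimal sketch of the kind of rule described above, written as illustrative pseudo-configuration rather than actual Contrail syntax; the field names are assumptions, and in practice you would create such a policy through the Contrail UI, API or Heat.

```yaml
# Illustrative pseudo-configuration only; the field names are assumptions,
# not the real Contrail schema. It expresses the rule from the talk:
# only HTTP traffic is allowed between the green and blue networks.
network_policy:
  name: green-to-blue-http
  rules:
    - direction: "<>"              # bidirectional
      protocol: tcp
      src_virtual_network: green
      dst_virtual_network: blue
      dst_ports: [80]
      action: pass                 # everything else is denied by default
# The policy is attached to both the green and blue virtual networks, so every
# VM launched in either network inherits it automatically.
```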
Before I jump to v2, which we've just released, I want to talk about v1. In v1, when you instantiated a service instance, OpenContrail would go ahead and manage the lifecycle of the virtual machine. So if you have a firewall VM, a load-balancer VM, or any other VM that offers some kind of service, OpenContrail would actually launch the VM, and in fact scale out the VMs if your throughput needs grew. In that sense we became a bottleneck in various ways. For one thing, if you wanted to pass Nova attributes such as availability zones or force-hosts, or even cloud-init scripts, on to the VM, we couldn't do that easily; we were always catching up with the orchestrator. The second thing was that we could not leverage everything happening in the OpenStack community around Heat.

So we introduced a new concept called service chaining v2. Instead of managing a VM, we now manage ports. Abstracting one level away, from a virtual machine to ports, gives us a massive advantage, because the port is abstracted from whatever it is attached to: a virtual machine, a container's veth interface, or a physical appliance. In all cases there is a logical port, and you create what we call a port tuple and use that port tuple to define your service. So if I say traffic from virtual network 1 to virtual network 2 always goes through this port tuple, what that port tuple is attached to is another level of abstraction, and that lets us leverage everything the VM orchestrator offers. In the case of Nova you can pass cloud-init scripts or anything else Nova supports. Similarly, since the user is now responsible for launching the VMs, you can use Heat templates to launch them, and that gives you things like automatic scaling. Heat can autoscale based on feedback from Ceilometer: if the throughput or CPU utilization gets high, Heat will launch more instances of that particular service, so your firewall can automatically scale out based on Ceilometer analytics feedback. Those are the kinds of advantages you get by binding to a port rather than managing the lifecycle of the VM ourselves; Heat is now, in effect, managing the lifecycle of the VM.

We offer a few other advantages here. You can launch services in active-active mode, so traffic is actually distributed: OpenContrail, being a full L3 overlay, has ECMP by default in every vRouter, so you automatically spray traffic across a scaled-out set of instances. We also support active-standby, using a combination of route preference and allowed address pairs, so traffic goes through one service chain and, if that chain fails, moves to the other. To detect such a failure we have the concept of a service health check. This is an interesting concept because, as you create these multiple chains, if one of the service instances, say one of the firewalls, stops working correctly, traffic stops flowing, but you cannot detect that just from the VM being up and running. So you can configure a health check that pings the service or does an HTTP GET against a URL to see whether the service is actually up; if it is not, we lower the route preference and the other chain becomes active.
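As an illustration of the Heat-driven autoscaling described above, here is a minimal sketch using the standard Heat autoscaling group and Ceilometer alarm resources; the image, flavor and network names are placeholders, and a real service-chain template would additionally wire the instance ports into a Contrail port tuple.

```yaml
heat_template_version: 2015-04-30
# Minimal autoscaling sketch: launch more service VMs when average CPU is high.
# Image, flavor and network names below are placeholders.
resources:
  service_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: vsrx-image
          flavor: m1.large
          networks:
            - network: left-net      # "inside" virtual network
            - network: right-net     # "outside" virtual network

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: service_group }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```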
The other thing I wanted to talk about, besides v2, is the work we are doing with Kubernetes and with Mesos. I'll focus on Kubernetes today. As you're aware, Kubernetes is Google's open-source orchestration system and has the concepts of services, pods, containers and labels. Today it offers a flat networking architecture; there are various solutions trying to solve that, and OpenContrail is one of them.

Whenever you launch a pod, there is this fundamental concept of defining the pod: a pod has a name, it offers a DNS-based service to outside entities, and that is how the pod is consumed. We use the concept of labels to map all the Kubernetes concepts onto Contrail concepts. Whenever you have a network tag, we create a virtual network from it. Whenever we have a network-access tag, say you have a Redis service and it offers network access to another entity, a front-end, we create a network policy between them; as I mentioned earlier, everything in Contrail goes through network policies. So we create a virtual network for each of the pods or services, and then we use the tags to automatically create a policy between the pods. What you notice is that you have fundamentally moved away from a flat structure to a completely policy-driven, isolated structure: these pods cannot talk to each other by default, they can only communicate if a policy is enabled connecting the two. This is an example of how you would typically write a pod definition, and the tags are what allow the pods to communicate with each other.

What we've had to do is add a listener called the kube-network-manager. It's a plug-in that listens to the Kubernetes API server and to all the messages that go by. We listen to those messages, convert them into OpenContrail-specific messages and program our forwarding engine. So whether it's a new network that needs to be created when you launch a new service or pod, or whether you want to connect two pods, all of that translation happens in the kube-network-manager. This runs on the master. In fact, all the Contrail controller daemons are containerized: the config node, the analytics node, the controller, everything has been containerized to run in this environment. On all the minions, meaning all the Kubernetes compute nodes, we also run the kubelet, and we replace the kernel module that normally handles Docker networking, the Linux bridge, with our vRouter. So again, it's an L3 overlay with OpenContrail. Whenever a pod comes up, its veth interface is plugged into our vRouter, so the vRouter is fully aware that a new container has come up; it then talks to the controller and exchanges all the information we need, IPAM and so on.

We also provide ECMP to services within OpenContrail, but the overall benefits are that you get a multi-tenant networking solution. There is complete isolation of tenant and pod traffic, as I mentioned earlier, and connectivity in the OpenContrail overlay is through the network policy mechanism that we have. It is also a seamless integration between private and public clouds: we've set up two Kubernetes clusters talking to each other through two MX gateways across data centers, and you can also go from your data center to Amazon and back through the same mechanism.

With that, I wanted to mention that Sushant is going to cover all the security features we have in our service chaining, in terms of the cSRX and vSRX offerings.
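As an illustration of the pod definition and tags referred to above, here is a minimal sketch of a Kubernetes pod manifest. The exact label keys that the kube-network-manager interprets as the network and network-access tags are assumptions here, so treat the key names as illustrative.

```yaml
# Minimal sketch of a pod manifest carrying the kind of tags described above.
# The label keys ("name" as the network tag, "uses" as the network-access tag)
# are assumptions for illustration; the plug-in's actual keys may differ.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    name: frontend     # network tag: pod is placed in the "frontend" virtual network
    uses: redis        # network-access tag: creates a policy allowing frontend -> redis
spec:
  containers:
    - name: frontend
      image: nginx
      ports:
        - containerPort: 80
```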
Thank you, Rudra. My name is Sushant, and I'm the product manager for data center security in the Juniper Networks team. I'm going to share a couple of very interesting announcements that we made at this summit.

The first one is the 100-gig vSRX. For those of you who are not familiar with the vSRX, a brief overview: it is the industry's fastest virtual firewall. We do 17 Gbps of large-packet throughput, or 4 Gbps of IMIX, with only two vCPUs and 4 GB of memory. We use DPDK internally, and we have integrated with OpenStack; the plug-in and driver are available for free. So, having achieved 17 Gbps with two vCPUs, we added more vCPUs to a single instance, saw that performance scaled linearly, and with just 12 vCPUs we were able to hit 100 Gbps of large-packet throughput. Another thing I want to stress is that the host we used for this particular test is a two-socket Intel Xeon machine with 12 cores per socket. Essentially we were using just one socket to achieve this 100-gig performance, and the entire second socket is free to run other workloads or other VNFs on the same host. Comparing the existing vSRX with the new vSRX: as we added vCPUs, performance scaled linearly, and not only did the basic firewall throughput increase, the advanced-services throughput went up 5 to 6 times. One particular thing I want to stress is IPsec performance: a single instance can now do 4 Gbps of throughput. The main use case for such a 100-gig firewall is virtualizing a carrier-grade firewall, where the key requirements are carrier-grade NAT and a high-performance firewall, and with the 100-gig vSRX we think we have a good fit there.

Moving to the other announcement we have made, the cSRX. We have taken all the features we support on the vSRX and moved them into a container; this is what we are calling the container-based SRX. It's a firewall built in a container, the industry's first container-based firewall. It has complete security feature parity with the vSRX. It doesn't have routing features, but the security features are all there. The code base is singly sourced from the vSRX code base, so essentially any fixes that go into the physical SRX get applied to the virtual SRX as well as the container SRX, and we retain the same management layer as the virtual firewall. So if you have built automation or tools around the physical firewall, you can repurpose them for the virtual firewall as well as the container-based SRX.

In terms of value proposition, the main one is elasticity: because there is no static reservation like in the case of a VM, you can provision more instances on a single host, and resource consumption grows only as traffic increases. It also adds greater agility to the environment: boot-up and restart times are under one second. And it adds cost savings for deployments, because the customer is no longer required to choose one monolithic appliance; they can choose only the services they require and deploy just those.
Also, note that the container only uses resources for the services it is actually running at a point in time, so resource consumption is restricted to the services enabled on that container, which also converts into cost savings for customer deployments.

Here is a brief comparison between the vSRX and the cSRX. The vSRX supports the complete routing and firewalling feature set; the cSRX does the security services. In terms of CPU requirements, the vSRX needs two vCPUs statically reserved for it, while the cSRX takes up CPU and memory only as traffic to it grows; when there is no traffic, in an idle scenario, memory consumption is about 40 to 50 MB. Even the image size is small: the cSRX image is only 150 MB, so it is very easy to download, deploy and get started in your environment.

I'll talk about a couple of use cases where we think the cSRX has a play. The first is the cloud CPE use case. Here, managed security service providers want to provide security services to a large subscriber base, so the requirement is to instantiate a virtual security instance for each of their subscribers. When the subscriber base is as large as, say, tens of thousands, the resources needed to provision that many virtual instances are very high. Because the cSRX consumes far fewer resources, and the individual throughput requirement of each subscriber is not high, you can now provision many more subscribers on a single host, which translates into cost and opex savings for the operator. Even in the uCPE use case, where some of the VNFs run on customer premises, the system requirements on the uCPE box are very low because of the low resource requirements of the cSRX. One thing I want to stress in this scenario is that Juniper can provide an end-to-end solution: Contrail can do the service orchestration, and the cSRX or the vSRX can provide the security services.

The second use case I want to talk about is micro-segmentation; we have a demo of this at our booth where we can go into more detail. The idea here is that security groups within OpenStack can only provide limited security options. To provide advanced security to VM workloads, we can provision a cSRX against each VM workload, and the traffic into and out of that workload will actually go through the cSRX. The user can go to the Horizon UI and apply security group rules, and those are converted into policies that get applied on the cSRX. So you are able to apply these security policies at the VM workload level instead of at a network or subnet level. Again, we have the integration with OpenStack, and the demo is at the booth. We can also do this with Contrail, where the qbr bridge is replaced by the vRouter in the Contrail environment.

Like I said, if anyone is interested in a cSRX beta, please do reach out. If you have other use cases where you think the cSRX would fit better, do reach out to us, and if you have use cases for the 100-gig vSRX, please reach out as well. And please do stop by our booth to look at the demo of the cSRX and the vSRX. Thank you, everyone.
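As an aside on the micro-segmentation use case above, here is a minimal sketch of an ordinary OpenStack security group, expressed as a standard Heat resource; the names are placeholders, and the translation of such rules into per-workload cSRX policies is handled by the integration described in the talk, not by this template.

```yaml
heat_template_version: 2015-04-30
# Minimal sketch of a Neutron security group (names are placeholders).
# In the micro-segmentation use case, rules like these are the input that
# the described integration translates into per-workload cSRX policies.
resources:
  web_workload_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: web-workload-sg
      description: Allow only HTTP and SSH to the web workload
      rules:
        - direction: ingress
          protocol: tcp
          port_range_min: 80
          port_range_max: 80
        - direction: ingress
          protocol: tcp
          port_range_min: 22
          port_range_max: 22
```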