Seems like we finally have a winner. Okay, my name is Hans von Schroeder, from a company called Juniper. Who is this company? Everyone knows our number-one competitor, called Cisco. Juniper's aim is to enable a basically unfettered internet, and that was our mission 20 years ago. At that time, this company saved the internet from its own growth rate, because we were the first company to move away from the then-current paradigm of software-based forwarding computers, with the decomposition of the control plane and ASIC-based forwarding. Nowadays, ASIC-based forwarding is what you will see in every modern router in the world; there is absolutely no discussion about this technology anymore.

Now, what is the next pitch after 20 years? Juniper is again on a mission, this time to save clouds from their own growth rates, and that is what Contrail provides. But over 20 years the paradigm has changed a little. The way we define the interaction between clouds and the outside environment now follows the way the internet itself is built, based on BGP signaling and MPLS as a forwarding technology. We bring this technology into the data center where the cloud is hosted, providing the same level of integration you can have with the technologies that exist in the internet today.

So what are we trying to achieve with the Contrail approach? We want to give you full virtualization of the entire network inside your data center, which means that whenever a virtual machine is running, it can be connected to any other virtual machine anywhere in your data center. And we are not looking at a couple of hosts; we are looking at thousands of hosts. In the traditional approaches, as you will see in a minute, you always have boundaries inside your data center, so you cannot connect every server with any other server on the other side of the data center and make it a seamless experience. What Contrail provides as a technology is to move away from the traditional approach of integrating into a VLAN and then stitching VLANs together, and away from software-based Neutron routers, towards a distributed framework. At the same time, you would usually say, okay, this is a company that knows routing and switching. But we say: whatever switching and routing you have inside your data center, it doesn't matter. We provide a framework you can use in any data center, as long as it forwards IP packets, and that is something you will always get from every data center.

Now let's look at how, in OpenStack, you have multiple flavors of integrating the networking piece. If you don't use our technology, it's either the Linux bridge approach, or Open vSwitch, or ML2 interfaces towards the switching environment. The problem is that as you spin up workloads and dedicate functions, a VLAN that you use for security separation gets spread across your entire data center and thousands of nodes, and that becomes quite messy. And you will run into the limits of your switches: they usually have a 4K boundary on how many VLANs they can tag in and out (see the quick arithmetic below), so trying to scale beyond that is quite messy. What the Juniper approach allows instead is for our technology to integrate seamlessly wherever a server in your data center is hosted.
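For reference, the 4K boundary comes straight from the packet format: a VLAN tag carries a 12-bit ID, while overlay encapsulations such as VXLAN carry a 24-bit segment ID, moving the ceiling from about four thousand to about sixteen million networks. A two-line check:

```python
# The "4K boundary": a VLAN tag has a 12-bit ID field, while overlay
# encapsulations such as VXLAN carry a 24-bit segment ID (the VNI).
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

print(2 ** VLAN_ID_BITS)    # 4096      -> the classic switch limit
print(2 ** VXLAN_VNI_BITS)  # 16777216  -> ~16 million overlay segments
```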
If you use our technology, as I said, every virtual machine, wherever it is and on whatever host in your data center, can be connected to every other virtual machine and to the outside network. Which transport technology you use inside your data center is your choice among many designs: you can use a modern spine-leaf architecture, or you can have a flat L2 network. The Contrail approach does not attempt, by default, to integrate into this environment; the approach is to overlay it with a tunnel transport technology. We will come to how that works, but first let's see how we enhance OpenStack.

With Contrail technology you get something that we call a network policy. In OpenStack you already know the concept of security groups, which are usually bound to specific virtual machines, not to the networking piece. With virtual network policies, you instead define how traffic may move from one virtual network that you have defined, say a green network, to a red network of the same tenant, and you specifically allow the traffic that you want to pass between these two networks. So you do not bind the security to the machine; you bind the security, and the forwarding that is allowed, to the virtual network you have created. That is a different approach to granting security: you define the policy on the virtual network, and then define which networks can speak to each other and which cannot.

On top of that, if you use such a Contrail policy, it lets you extend what OpenStack can do in the regular case: you can change the topology, using our SDN controller, to force traffic through virtual or physical instances that inspect the traffic. A typical example: say you have a web server farm and some databases, sitting in different networks but belonging to the same tenant, the same OpenStack project. Now you see there is a security breach and you want to inject a virtual machine that protects against SQL injection. Usually you would have to tell your applications that the default gateway has changed, because you want traffic from one network to go not through the default gateway but through the virtual machine implementing this service. Contrail instead changes the topology to force the traffic, seamlessly for the application, through this virtual machine, firewall, or whatever you have, steering it selectively through what we call a service chain. This also lines up with ETSI NFV; in ETSI NFV terminology it is called a service graph. We call it service chaining; I think that has more grip, and it is easier to understand than calling it a graph.
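To make the policy idea concrete, here is an illustrative sketch, plain Python data rather than a real API call, of what such a network-to-network rule expresses. The network names, field spellings, and port number are invented for the example; the actual Contrail and Neutron client calls vary by release.

```python
# Illustrative only: the intent of a Contrail-style network policy as plain
# Python data. Real deployments create this via the Contrail/Neutron APIs;
# every name and field spelling here is invented for the sketch.
green_to_red_policy = {
    "name": "green-to-red",
    "rules": [{
        "direction": "<>",            # bidirectional
        "src_network": "green-net",   # the policy binds to networks, not to VMs
        "dst_network": "red-net",
        "protocol": "tcp",
        "dst_ports": [3306],          # e.g. only the database port may pass
        "action": "pass",             # a service chain would swap this for
                                      # something like {"apply_service": [...]}
    }],
}

# Attaching the same policy to both virtual networks is what connects them;
# detach it and the two networks are isolated again.
```

The point of the sketch is the binding: the rule names two virtual networks, not two machine addresses, which is exactly the inversion of the security-group model described above.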
Now, how do we implement this? I need to go into a bit more technical detail on the realization of virtual networks and how we connect virtual machines to each other; there are two main components. At the top we have OpenStack as the orchestration engine, and we plug into Neutron networking, which is the generic way to attach any networking implementation to OpenStack. All commands come from OpenStack, so you will still be able to use the OpenStack APIs to create virtual networks, and we receive the appropriate messages from the OpenStack orchestrator.

Then there is a central SDN controller, usually three of them for backup and redundancy reasons, which has the entire knowledge of all routes inside your data center. No matter where you spin up virtual machines, it knows everything: which IP addresses belong to which host, and which physical server each virtual machine that Nova has scheduled and spun up is connected to. It distributes this information to a distributed piece, the Contrail vRouter, which is basically a replacement for the Linux bridge or Open vSwitch: you rip that piece out and replace it with the Contrail vRouter, which becomes the translation engine between the virtual networks in Linux, usually tap interfaces in KVM, and the physical network interface cards that carry the IP addresses assigned to the physical environment.

No virtual machine can escape sending its traffic through the vRouter: either the traffic is hairpinned on the same compute node to another virtual interface, or, when traffic has to go from a virtual machine to another host in your environment, we add tunnel information. The raw Ethernet packet from the virtual machine is encapsulated and sent over to the next compute node, which decapsulates it and forwards the traffic to the virtual machine with the destination IP address. With this we become completely independent of what your physical infrastructure is and how it transports traffic between compute nodes: the physical interface cards have fixed IP addresses, those are used in the outer tunnel header, a little more information is added to identify which virtual machine you are targeting on the other end, and then the traffic is forwarded.

As for which technology we use here, we have currently implemented three flavors of tunnel: MPLS over GRE, MPLS over UDP, and VXLAN. So no matter what you prefer, you can select the method, and the framework will make sure the tunnels are built appropriately, because the vRouter does not see the global information that the SDN controller sees. The vRouter on each compute node sees only what it needs to know about how it is connected to other virtual machines, and it gets this information from the central controller.
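As a sketch of that encapsulation, here is one of the three flavors, VXLAN, built with the scapy packet library; VXLAN is shown because it carries the guest's raw Ethernet frame, while the two MPLS variants stack differently. All MAC and IP addresses and the VNI value are invented examples.

```python
# A sketch of the overlay encapsulation, using the scapy packet library.
# VXLAN is shown (it carries the guest's raw Ethernet frame); all MAC and
# IP addresses and the VNI value are invented examples.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original frame as emitted by the virtual machine (overlay addresses).
inner = (Ether(src="52:54:00:aa:00:01", dst="52:54:00:aa:00:02")
         / IP(src="10.0.1.5", dst="10.0.2.7"))

# The vRouter wraps it: the outer header uses the fixed physical addresses
# of the two compute nodes, and the VNI identifies the virtual network so
# the far end knows which VM the frame is for.
outer = (Ether()
         / IP(src="192.168.100.11", dst="192.168.100.12")  # compute-node NICs
         / UDP(sport=49152, dport=4789)                    # VXLAN port
         / VXLAN(vni=4242)
         / inner)

outer.show()  # prints the stacked headers of the tunneled packet
```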
Now, what happens if you use what I just called service chaining? Service chaining introduces a topology change: instead of traffic going directly from green to red, as in the last picture, which would be possible, we force the traffic to a next hop that is a virtual machine with an interface into this green network, but which is not the default gateway. We just forward the packet internally to that service instance, and that service instance is a regular virtual machine. All it needs is two interfaces instead of one, because it is not an application with an IP address that you address; it may not even have an IP address. All it sees is an ingress interface and an egress interface, and its whole mission is: a packet arrives on the ingress interface, it is copied over to the egress interface, and the service is applied, whatever that service is. Is it a firewall? Is it, say, a virus inspection machine? Build what you want. Packet comes in, packet is forwarded to the other interface, service applied. Everything else is a regular OpenStack-controlled virtual machine that you spin up, and you just change the policy to say: I want this machine included in the path between these two networks. That is the mission of service chaining, done by a policy change, and that is where SDN comes into the picture: you can have networks that are directly connected, so green can talk to red directly as in the last picture, and then you just say, okay, I have this new virtual machine, do the topology change for me, and that is what the SDN controller delivers.

In the last two pictures we had only compute nodes talking to compute nodes, so that is east-west traffic. At some point you will have to go outside: to the internet, to the WAN. So we need an integration into something at the border of your data center that routes towards the WAN or the internet, or even just plain IP, and acts as a border gateway. Contrail has decided to use the technology that is usually already in place, because if you are a network service provider this is nothing unknown: you use signaling protocols like BGP, and in your WAN you maybe use MPLS-based forwarding. All you need is signaling towards the data center gateway. The Contrail controller connects directly via BGP to your data center gateway, and that is all it does. If traffic comes in from the outside and wants to reach a destination IP address that belongs to a virtual machine, we tell the data center router: your next hop is the physical IP address of the compute node, and you have to add a label, and then you are reaching this virtual machine.

And don't get worried about me using BGP, MPLS, and this stuff. You don't see it; your data center workloads and virtual machines see nothing of it. It is a one-time setup, and that setup on the data center router is maybe 20 lines of configuration to enable the BGP session to the controller and be able to do the forwarding, and off you go. We know that people with an enterprise background usually say, okay, BGP, I'm out of here. No, it's simple: it's just 20 lines of configuration on your data center router. Beyond that there is no integration into the physical environment; you don't need to know it, all you talk is IP addresses, and the tunnels do the forwarding for you automatically.
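To make the BGP piece concrete, here is an illustrative sketch, plain Python rather than a real BGP implementation, of the essence of the route the controller signals to the gateway: the VM's prefix, a next hop that is the physical compute-node address, and the label that picks out the VM behind that node. All values are invented.

```python
# Illustrative only: the essence of the route the SDN controller signals
# to the data center gateway over BGP. No real BGP library is used; the
# addresses and the label value are invented for the sketch.
route = {
    "prefix": "10.0.1.5/32",       # the virtual machine's overlay address
    "next_hop": "192.168.100.11",  # physical IP of the hosting compute node
    "label": 17,                   # label that selects the VM behind that node
    "encapsulation": "MPLSoGRE",   # or MPLSoUDP / VXLAN, whichever was chosen
}

def gateway_action(dst_ip: str, routes: list) -> str:
    """What the gateway does with inbound traffic for a VM: tunnel to the
    compute node named in the route and push the label so the vRouter
    there can hand the packet to the right virtual machine."""
    r = next(r for r in routes if r["prefix"].split("/")[0] == dst_ip)
    return f"encap {r['encapsulation']} to {r['next_hop']}, push label {r['label']}"

print(gateway_action("10.0.1.5", [route]))
```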
Last but not least, how do we integrate bare metal? You always have something that cannot load the Contrail vRouter, because the vRouter belongs on the hosts where you spin up virtual machines; a bare metal server or an appliance has to be handled some other way. So what happens in that case? We then need to integrate with the switch that has the plain Ethernet connection to this server, appliance, or whatever it is. Usually people use top-of-rack switches that have VXLAN gateway functionality; we connect to those switches using OVSDB as a signaling protocol, and then we can make these workloads appear seamlessly in your data center infrastructure. We also provide things like floating IPs, because we implement the floating IP NAT on the data center router: the virtual machine doesn't know that you are changing the IP address, because that happens on the data center router. So if your bare metal server needs to get a floating IP, we can provide that too.

Now, multi-hypervisor integration. By default you get a kernel module that you load into your Linux system, and KVM is usually the dominant hypervisor, but that doesn't mean we cannot provide the same functionality on other hypervisors. Here is an example of how we integrate with a different hypervisor, the VMware ESXi hypervisor. Contrail, or Juniper, or let's say anybody, is not allowed to place any product inside the VMware hypervisor: it is proprietary, you are not allowed to extend it, otherwise you lose support from VMware. So what we do as an alternative is spin up a virtual machine on every compute node, not integrated into the hypervisor; we steer the traffic first to this virtual machine, which then connects all the virtual machines on the ESXi host. With the latest build, in version 3, the whole thing can be integrated into OpenStack and run KVM workloads and ESXi workloads simultaneously. It is just a little different: on the KVM compute nodes we are part of the kernel as a module, while on the ESXi hypervisor we need a dedicated virtual machine to do this forwarding transport. It is there for customers who have legacy ESXi deployments and want a migration path, and that is what we provide here.

Let's skip this. We can surely integrate into Docker as an environment: the same technology will provide the same networks and the same abstractions to any Docker container you have, be it in OpenStack or in Kubernetes; that is your freedom of choice. I'm not going into the service chaining heat templates we have. Service chaining is something new to OpenStack; there is no native API you can use today. People are working on that, and hopefully we will have something we can use as an API. For those cases we provide heat extensions that you can leverage when you roll out these things and have to connect all the different pieces in this environment.

Last but not least: because we integrate our Contrail vRouter, we sit in the path of every packet you send from the virtual environment to anything physical or to the WAN, so we see all these packets. Contrail also implements a reporting engine that collects all this flow information into a database. You can harvest that database after an event has happened, when you don't know what the status of your network was an hour ago and which virtual machines talked to each other: you can extract the database into charts and analyze it, or take the raw data stream and do your own reporting on it. That is a function we also provide.
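As a sketch of harvesting that database: this assumes the Contrail analytics REST interface on port 8081 with its /analytics/query endpoint, as documented for the 3.x releases; exact field names can differ per release, and the host name and time window are invented.

```python
# A sketch of harvesting the flow database, assuming the Contrail analytics
# REST interface (port 8081, /analytics/query) as documented for the 3.x
# releases; field names may differ per release, and the host name and time
# window are invented examples.
import requests

query = {
    "table": "FlowRecordTable",
    "start_time": "now-1h",   # look back one hour, after the fact
    "end_time": "now",
    "select_fields": ["sourcevn", "sourceip", "destvn", "destip", "bytes"],
}

resp = requests.post("http://analytics.example.net:8081/analytics/query",
                     json=query)
for flow in resp.json().get("value", []):
    print(flow)  # who talked to whom, and how many bytes moved
```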
With Ceilometer you don't get this kind of information; it is not retrospective, so you can't consult a database after a certain period of time and say, okay, this guy and this guy talked to each other and transported so many bytes.

If you want to inspect traffic, that is possible too: we let you launch a virtual analyzer and then select which traffic you want to inspect in that instance. Otherwise you would be running around connecting to monitoring ports and trying to make sense of it. Contrail provides all the steering functions to send a copy of that traffic to your analyzer image, and the Wireshark instance here is an example of how you can analyze exactly the traffic that is relevant to you.

Then, for people who want to know the status of the physical environment: as I said, you do not need to integrate into your switching environment, but some people want to know what the physical network is doing and how the traffic is routed. We use standard interfaces like LLDP and SNMP to gather that information and present it to you in a topology chart. If you want to go further, you can also select a certain flow and say, okay, show me how this traffic actually goes through my network, so that I know what is impacting it, how I can debug it, and where I have to send people if something goes wrong.

Thank you. How many minutes have I run over? It was quite technical, but you see, once you deploy it, the virtual machine doesn't know about it; all you see is an enabled business, and that's the important thing.