OK, so, good afternoon, welcome to our session. The session is called "Operators' Experience and Perspective on SDN with VLANs and L3 networks", where we would like to show you our experience operating different SDN solutions in different customer environments: what we tested and how we built the infrastructure. The agenda: first something about TCP cloud, what the company is and what we are doing, then something about Workday, then a few words on why that is relevant here. (The presenter remote isn't working, so I will continue without it.) Then something about OpenStack networking and its key points, then the criteria for enterprises, because we have more than 15 or 20 enterprise customers and they have different needs, so we will bring that perspective. Then the use cases, and finally we try to do a small, personal comparison of SDNs. So, what is TCP cloud? I don't know if someone was at yesterday's keynote, but we are a company focused on building private cloud solutions for enterprise customers, and we have two domains: one is private clouds based on OpenStack, and the second is an IoT (Internet of Things) platform where we integrate these open source projects together. Today we will talk more about the networking stuff and OpenStack. And something about Workday: this session should have been with Edgar Magana from Workday. Unfortunately, he cannot be here because he was voted into the user committee and they have a parallel meeting, so instead of him is my colleague Marek, who is chief network engineer at TCP cloud. Workday is a customer who uses the Juniper Contrail SDN; it's a public reference, and maybe what's interesting is that they use an L3 fabric everywhere. They don't use VLANs; they have an underlay network based on an L3 fabric and then they put Juniper Contrail on top. And now let's jump to the networking.
So, all clouds are about networking, and every time we speak with a customer and work on some case, we end up discussing networking. It's the most controversial and problematic part, because there are a lot of plug-ins, everyone has a different, hybrid environment, and it's really difficult. You usually have to solve issues like high availability, scalability, migration, and multi-tenancy across environments, and together with OpenStack networking comes NFV and functions like Load Balancing as a Service, Firewall as a Service, and VPN as a Service. What's most important: I tried to count the plug-ins, and there are more than 30 or 40 networking plug-ins, and every year there is a new SDN solution which says "yeah, this is the best SDN, better than the other SDNs, we are working with OpenStack, you should try it, you should do this and this." Everyone does something else, so it's almost impossible for someone who wants to try it to choose which one is suitable, so we will try to give you our perspective on how we see this. If you look at the last OpenStack user survey, you can see that most environments run on Open vSwitch. Open vSwitch is usually used for public clouds, for environments with VLANs, sometimes with VXLANs. From a historical perspective it was very difficult to keep Open vSwitch and Neutron working in operation because of the centralized network node; nowadays, with the DVR solution, it's much better to manage. And then there are many other plug-ins. The first vendor plug-in is OpenContrail, and we will speak about that solution in this presentation too.
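To make the plug-in landscape concrete: in a Neutron deployment the choice shows up in the ML2 configuration, where you pick the type and mechanism drivers. A rough sketch of a typical Open vSwitch + VXLAN setup (the option names are real ML2 options, but the exact values and file path vary by release and distribution):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values)
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vxlan]
vni_ranges = 1:10000
```

Swapping in a vendor SDN usually means replacing the mechanism driver and its agents, which is why each plug-in behaves so differently in operation.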
So the general SDN objectives, when we come to any customer, are: we need a secure multi-tenant solution with network isolation, policy driven; we need to get bare metal servers inside; we want to dynamically manage and schedule VMs, attach ports, create policies between networks, security groups. It must be dynamic; you need a scalable, orchestrated solution with VNFs and service insertion through plug-ins. So these are the general objectives everyone usually has when starting. But what are the critical points? I think the first step is that you need to decide whether you want to use overlay or not. If we start with VLANs, there is a limit: you have about 4000 VLANs. I know that's enough for some environments, but most big customers have an issue with this. There is no failure isolation domain, it's difficult to manage, and our experience is that companies are separated into departments, so you have network departments, storage departments, and when you need new VLANs it's difficult to orchestrate the bare metal devices; in a lot of cases customers have different boxes. So VLANs are complicated. The second option is overlay. After VLANs came the overlay, and it's simple: you usually don't need to configure the network boxes, because you have encapsulation (VXLAN, GRE, MPLS over GRE, many encapsulations) and one controller which orchestrates everything. This is how most SDNs work, like PLUMgrid, MidoNet, VMware NSX, Nuage Networks and many more. And then recently new SDN solutions appeared which started talking about no overlay, "overlay is bad." Two years ago, one year ago, everyone spoke about overlay: overlay is great, use overlay, everything must be there; and now a new wave is coming to market saying that overlay is bad and it must be pure, intelligent IP routing with no translation in between.
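To put numbers on the VLAN ceiling mentioned above: the 802.1Q tag carries a 12-bit VLAN ID, while VXLAN carries a 24-bit network identifier. A quick check of the arithmetic:

```python
# 802.1Q reserves a 12-bit field for the VLAN ID; IDs 0 and 4095 are
# reserved, leaving 4094 usable segments -- the "4000 VLANs" ceiling.
VLAN_ID_BITS = 12
usable_vlans = 2**VLAN_ID_BITS - 2

# VXLAN (RFC 7348) carries a 24-bit VXLAN Network Identifier (VNI),
# so one overlay domain can address about 16 million isolated segments.
VNI_BITS = 24
vni_space = 2**VNI_BITS

print(usable_vlans)  # 4094
print(vni_space)     # 16777216
```

That four-thousand-fold difference in addressable segments is the core reason multi-tenant clouds outgrow plain VLANs.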
So these are projects like Calico and Romana, and from our point of view they are more for really cloud-native workloads: applications which don't need L2 connectivity and don't need live migration (and it's almost impossible in some enterprises to build a cloud for them without live migration and IP failover). You just launch a cloud-native application, and if you lose one instance you redeploy another and you don't care what the IP address is, and I think it's the future. It's the future for containers, for Kubernetes, for these orchestration plug-ins, where you don't care where it is and what the IP address is; you just want to know where your service endpoint is. So from our perspective, VLANs are not so suitable for clouds for most of our customers, because VLANs are static and cannot overlap; when developers need to launch multiple identical environments next to each other, it's not possible. Calico and Romana are the future, but we don't have a use case with real enterprise customers yet. So we will talk about overlay solutions and compare them in this session. The key criteria for enterprises, from our point of view, are usually functions like Load Balancing as a Service, so the application can be load balanced by something inside the solution.
Then the direct traffic datapath, which was the most critical thing in a couple of releases before Juno. Even in Juno, what was not production-ready with Open vSwitch was the direct datapath: for north-south traffic you had to go through a network node, and on a network node failure, I remember when we had it in production we could not sleep, because it was usually very difficult to get it back up and running. North-south communication is the next point to look at in an SDN: think about whether your SDN goes through a network node, whether the vendor has some special appliance which can be scaled, or whether it can be integrated into your existing edge routers and network devices; I think that's important. Then multiple external networks: how easily can you create multiple external networks? With enterprise customers you cannot come and tell them "you will have just one floating IP pool and that's it," so think about external networks. Then performance and scaling: is there any reference customer with more than 15 hosts, or is it just an SDN with one booth and a nice party at the summit? This is very important. Then bare metal connectivity, because a lot of workloads, especially databases and some other appliances, cannot be put into the cloud, so you need to connect them. And then there are some optional SDN requirements, for example for service providers. The first one, open source, is for me the most important; it's not critical for others, but for me it is, because when it's open source a lot of people can use it, you can find the use cases, you can contribute, you can see the whole development. It's not behind a wall where a couple of people develop something, show you the slides, and you cannot take it and try it; that's difficult, and you never know what the vendor will do. Then L3VPN and EVPN capabilities: most service providers require us to get L2/L3 VPNs into their network, into their virtual machines. And then the recent stuff: a multi-cloud solution. Marek
will talk about a use case and the architecture of how we built a multi-cloud integrated solution with SDN. Integration of physical load balancers: most companies need SSL offloading directly on hardware, not in a virtual machine, so that's the next thing. And then IPv6 support, and Intel DPDK to increase packets per second in your routers, and SR-IOV; these are the newer features. So let's talk about two solutions. After giving you these requirements, I am sure you will find maybe two SDN vendor solutions which are suitable. We took OpenContrail because Workday uses OpenContrail and most of our customers, 90% of them, run on it, and we will compare it a little bit with DVR. If you look at this slide, you can see that it's not easy to understand, usually for someone who is starting, why your instance is attached to a Linux bridge, then its tap port goes through the qvb/qvo veth pair into the integration bridge, then into a VLAN and outside. It's difficult to debug, you don't have any analytics, you have nothing, and if you want to operate this you have to have very good know-how and knowledge; it's very complex. By the way, the Linux bridge is there because the OVS interface is not able to work with iptables, so they put a Linux bridge in between. So it's very complex, it's L2, and you need to bring your external network into each compute node in your infrastructure, because otherwise it will not work; you can imagine that it's very difficult to have multiple external networks in your infrastructure. I think the use case for this is companies which need just one external network, a simple solution, not so big; this is my opinion. And if we compare it with another open source solution, OpenContrail, you can see that the VM is just connected to the vRouter through a tap interface, there are virtual routing and forwarding (VRF) tables for each network, it's fully L3, it's overlay, and you can
have a fully L3 fabric, so you don't have to use any VLANs; everything can be L3-routable, with direct connections between endpoints. It doesn't use iptables: if you boot 50 or 60 VMs on a server with several iptables chains per VM, that's hundreds of chains, so iptables also doesn't scale very well; here the filtering is done directly in the vRouter. So that's the difference. My last point before I hand over to my colleague: it is very good to have a direct datapath from your VM, not only inside the cloud but also to the outside. So no network node (I think DVR fixed that issue), but also no proprietary gateway. We have experience with two customers who tried some different SDN solutions and had a lot of issues integrating external networks through appliances; even if you can scale the vendor's appliances for your VMs or bare metal machines, you cannot get 9.6 gigabit throughput from your virtual machine to the outside of the cloud, which is possible when you do the routing on routers and not on servers, and I think routers are here for routing. The encapsulation is usually MPLS over GRE and VXLAN with EVPN stitching, so for example OpenContrail can be used with any network vendor: Cisco, Juniper MX, Alcatel; we tested all these devices and terminate networks there. That's it from my side; Marek will explain more about the other use cases, what we did. OK, thanks. The first thing I want to tell you about are the considerations. As I said, there are a few considerations for getting better network performance between virtual machines running on different hypervisors, and the first one is the encapsulation you use. With OVS you get only VXLAN; with OpenContrail you have VXLAN as well, but also MPLS over GRE and MPLS over UDP. We are actually using these two encapsulations because the performance is much better than with VXLAN, mainly because the hardware offloading suits UDP and GRE better than VXLAN.
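As noted, the preference for MPLS over GRE/UDP is mainly about NIC hardware offload, but the per-packet header tax also differs between the encapsulations. A rough byte count, assuming an IPv4 underlay, a minimal GRE header, a single MPLS label, and that the MPLS tunnels carry the bare tenant IP packet while VXLAN must carry a full inner Ethernet frame (a sketch, not a wire-format reference):

```python
# Approximate bytes added in front of a tenant IPv4 packet by each
# encapsulation (RFC 7348 VXLAN, RFC 4023 MPLS-in-GRE, RFC 7510 MPLS-in-UDP).
OUTER_ETH = 14   # outer Ethernet header, shared by all three
IPV4 = 20        # outer IPv4 header
UDP = 8
GRE = 4          # minimal GRE header
MPLS = 4         # one MPLS label
VXLAN_HDR = 8
INNER_ETH = 14   # VXLAN encapsulates the tenant's L2 frame, so its
                 # Ethernet header rides inside the tunnel too

overhead = {
    "mpls-over-gre": OUTER_ETH + IPV4 + GRE + MPLS,                   # 42
    "mpls-over-udp": OUTER_ETH + IPV4 + UDP + MPLS,                   # 46
    "vxlan":         OUTER_ETH + IPV4 + UDP + VXLAN_HDR + INNER_ETH,  # 64
}
for name, nbytes in sorted(overhead.items(), key=lambda kv: kv[1]):
    print(f"{name}: {nbytes} bytes per packet")
```

The MPLS-over-UDP variant keeps a UDP outer header, which is what lets NICs apply the same receive-side hashing and offloads they already do well for UDP flows.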
We only use VXLAN when we want north-south traffic from Contrail to the MX routers, or other routers, to get L2 stretched across the data center. And when we are talking about scaling nodes and scaling the network, I think you will run into performance issues with RabbitMQ before you hit networking issues with Contrail. We actually sometimes switch off the analytics in Contrail because it gets really chatty. OK, now I would like to tell you something about the features brought by third-party SDN controllers like OpenContrail, and the first one is multi-cloud networking. You can take two different cloud providers, like OpenStack and Kubernetes, and connect them together through a single SDN platform. As you can see in the picture, we can connect OpenStack together with Kubernetes, so the virtual machines communicate directly with the Docker containers. Another use case can be having two separate OpenStacks, for example in different geographical locations, where you want virtual machines on the same virtual network so they can communicate. The stuff behind this is actually pretty simple: Contrail controllers use the well-known BGP protocol to exchange routing information, so the only thing you do is create a BGP peering between the Contrail controllers. A Contrail controller doesn't even know there is another Contrail behind the peering; it doesn't care whether it is a router or another controller, it just uses BGP to exchange the information. So what can you do with that? For example, here we have Kubernetes and OpenStack federated by Contrail. You can have applications running on Kubernetes, because that's what containers are good for, but you want the backend running on VMs. How can you do that? You can do it with Contrail via the BGP peering, and if you are familiar with MPLS L3 or L2
VPNs, what it actually takes is to put the route target under the VRF. To do that, you only need to go to the Contrail controller of OpenStack and set the route target, then go to the Kubernetes controller and set the same route target, and then the containers can communicate with the databases on the same network, L3 or L2. Another quite good use case is how to connect the legacy world. You may have applications, maybe some kind of databases, Oracle databases, which cannot be virtualized, but sometimes you want to communicate with them on layer 2. How can you do that? There is a possibility with OVSDB: Contrail can manage physical boxes via OVSDB, so you say which physical port belongs to which virtual network, and then all traffic that arrives on that interface is switched into the virtual network. And it's not vendor dependent: Juniper switches, or, as we tested, also Open vSwitch; we put Open vSwitch on a bare metal server and managed it with OpenContrail. Another thing is managing physical load balancers like F5. Maybe you saw with other vendors that they tell you they can manage the physical balancers, but it's actually through their UI, so you just go to another dashboard and create balancers over there, and I don't think that's a benefit; you could do that from the F5 dashboard itself. I think the key benefit is that you should be able to manage a physical F5 or other balancer via Heat, so you have Heat resources that manage the physical balancers, and this actually works with OpenContrail. As you can see in the picture, traffic goes to the MX routers and from the MX routers to the F5, and then you just hairpin back to the router, or go wherever else you want. OK, so the last thing people are handling nowadays is how to use IPv6 in the cloud; not every solution supports it. OpenContrail has full IPv6 support, and not only for the virtual machines in the cloud: you can also extend it outside to the routers. It's just another
address family if you are using multiprotocol BGP: you just enable IPv6 tunneling on the routers and you have it. OK, thanks. So, we have the last 5 minutes. This slide just shows how we deploy OpenStack; it's a logical model. If I briefly explain: the yellow parts are the OpenStack API and RabbitMQ virtual machines; everything is virtualized. We have a different approach than other vendors: we virtualize and separate all the OpenStack services. So these are the OpenStack APIs; then we have 3 virtual machines with Ceilometer and MongoDB, because sometimes Ceilometer breaks everything and you still want your cloud functional, so we separate that. Then we separate the proxies: Horizon and the SSL proxies which proxy traffic from the outside world to the internal nginx. Then we have Graphite servers for collecting metrics from the cloud for our billing system, and an automatically integrated monitoring system based on an open source monitoring framework. And these 6 virtual machines are for OpenContrail: we run two Cassandra clusters, 3 virtual machines for config and control and 3 virtual machines for analytics, collecting metrics from the cloud. This picture shows the cluster deployment; I will skip it. The last slide is where I tried to compare the solutions a little. As I mentioned, licensing is important: OpenContrail is fully open source, without any limitation, unlike other SDNs, and the same goes for DVR. Hypervisor and orchestrator support: we showed that we are able to scale your Apache server on Docker containers while your database runs on virtual machines on OpenStack, very easily, so it supports Kubernetes. Support for Docker by itself doesn't make sense for us; the only way to use Docker is through some orchestrator like Mesos or Kubernetes. It also supports VMware vCenter. DVR here has some limitations, because VMware
does not want too much VMware support in DVR, because you should buy NSX. Other SDNs also support VMware, but VMware support is always through a service virtual machine on the ESXi node: all traffic has to go through this virtual machine, except with NSX, because the other vendors cannot go into the VMware kernel (it's not approved), so they have a service machine on each hypervisor to route traffic outside. Then the gateway routing: as I said, we are able to integrate this solution and attach your floating IPs directly on the edge routers in your DC and create any kind of VPN. With DVR you have to bring the external network in everywhere, and other SDNs don't have features for floating IP association directly on the edge routers, usually because they prefer VXLAN. Many times I came to a customer and they told me "we need VXLAN"; I asked why, and they said "because the vendors said VXLAN is the future." I think MPLS over GRE is fine, and as Marek said, the offloading on Intel cards, especially older ones, is much better for GRE than for VXLAN. On performance, we are near line speed with this solution in the VMs, for east-west and also north-south traffic. When we tested DVR we had some issues and were not able to get past 6 gigabits; when I discussed it with other people who use it, they had bought special NIC cards which provided better offloading for them. And other SDNs, because they have appliances and not routers, are also not able to get this kind of performance in a single session. So yes, we are OpenContrail contributors, and the SDN conclusion is that for us overlays still make sense, because customers require a lot of these features; I cannot come to a customer and say "no live migration, no L2, none of this," it's almost impossible. So there must be overlay, and overlay brings great integration between VMs and containers, which you could see in my keynote, where from a Raspberry Pi we created an MPLS over
GRE tunnel to a data center in Europe, directly from a Docker container at this conference, which is pretty cool. So join our community, it's huge; on the slide you can see the customers are growing, and I think a lot of big enterprises run it in production, like AT&T and the other references mentioned at the summit. And I would like to say that it's not about Juniper; I usually get the question whether we are paid by Juniper. We are not paid by Juniper, we just realized that this works. So thank you very much; if you have any questions, please ask, there are two mics where you can ask them.

You mentioned that you sometimes found the analytics and the flows chatty; did I understand that you turned it off because of that? Can you say a little more about that?

In the Contrail 2.1 release, or thereabouts, we had some issues with the analytics because it sent information on each flow, so it doubled the traffic, and we also had analytics integrated together with control and config, with one Cassandra cluster for everything, and it completely destroyed the environment. So we had to separate it into two Cassandra clusters, and in some cases we disabled it. The new version that was released has special flow handling and there are significant improvements in the analytics, but yes, we had to disable it; every software has some mistakes. In our internal cloud we disabled analytics, but our customers usually require it, so it's running there.

Hi, thank you for your presentation. I have a question: I understood the advantages of the overlay-network-based system, but I'd like to know your opinion about a pure L3-based network, as you presented at the beginning of this presentation; the difference between the overlay network and a pure L3 network.

As I mentioned, if you mean Calico and Romana, as I said, it's the future and I would like to use it, but in our use cases the customers need L2, for example between
the VMs, and as I mentioned, Calico and this stuff is more suitable for containers and for cloud-native applications which were developed in that way. But if you come to an enterprise today, where mostly there is VMware vCenter, they think they can take their workloads as they are, put them there, and everything will run; that's not possible, you need to make some changes, and what you cannot tell them is "none of the features you are using exist here, now switch." So this is the reason.

Sorry, not a question, just related to that: I thought I would add that Project Calico actually does now support VM live migration, as of the last year.

I like this project; not just "like," I really like this new approach, and I would like to use it, and in some cases we are now trying to get it to customers. So it does support live migration now. Is there any other question? Thank you for your attention, thanks.