With the rapid adoption of OpenStack in the telco world, and with SDN and NFV becoming more and more powerful, it has become a necessity to run multiple OpenStack instances; a single instance is no longer enough. With that in mind, we are going to talk about a specific use case today. HP and Nuage Networks have come up with a comprehensive solution to this complex problem of multi-data-center integration. I am Nayana, and I work as an architect on Helion Carrier Grade in the NFV division of HP Enterprise.

Here is the use case we are going to look at today. I have shown two data centers, DC1 and DC2. There are multiple regions in each data center, each with its own OpenStack instance: for example, a KVM region and an ESX region. Similarly, the other data center has a KVM region and a bare-metal region. With multiple hypervisors and bare-metal integration, there is a lot of complexity in the connectivity between the two. With this use case, we are going to show how we solved this problem, so that VM1 and VM2 in the first data center are not only able to talk to each other, but are also able to talk to the VMs and bare-metal servers in the other data center.

Communication service providers face many challenges with the current limitations in technology and with the complexity of these use cases; we will look at the ones relevant here. HP's Helion Carrier Grade solution has powerful SDN capabilities that help solve some of these intricacies. Then we will look at the actual multi-data-center use case, with some details on the configuration and connectivity. We will go over some highlights of the solution, and we will cover its benefits, not just for this use case but for some other use cases as well.

So, let us look at some of the challenges CSPs have today. With current limitations, siloed OpenStack instances in a single data center are not sufficient to solve all the problems. The challenge for CSPs is how to integrate a complex setup with multiple data centers, not just two, but many. Today, Neutron can manage only the local network and its IPs; in the multi-data-center case, CSPs need a unified view of all the networks across data centers, and they need to be able to manage all the IPs. The L2 and L3 networks today are confined to the local OpenStack instance, but with this solution we are going to show how the challenge of integrating and extending L2 and L3 is solved. There is also valuable legacy equipment on existing VLANs in the data centers, and as we cloudify, we want to make sure the VXLAN networks can talk to that legacy equipment; that is the challenge of VXLAN-to-VLAN connectivity. The security groups we have today are scoped to a single OpenStack instance, but in this case we need a solution that can span data centers. With all these complexities, OPEX is of course a major challenge for CSPs, and we will see in a moment how this solution reduces it. That covers the networking side, but there are also the fundamental carrier-grade requirements of availability and performance; earlier, Madhu had a session where he covered the tests performed to achieve this performance boost. Finally, single-vendor lock-in is a problem that CSPs want to solve.
With the solution we have today, we can coexist with other vendors' OpenStack deployments, and the SDN can solve the problem of talking to multiple VIMs.

Pertaining to this solution, let us look at some of the capabilities of the SDN we used. HP has an offering called DCN, Distributed Cloud Networking. It is an SDN solution that provides network virtualization and, as mentioned earlier, can talk to multiple OpenStack instances. DCN is delivered as the Virtualized Services Platform, which has three major components. The first is the VSD, the network policy engine; it also collects statistics from the networks, which can be consumed by analytics software. The second is the VSC, the SDN controller, which connects to the VSD for all the handshaking and handles routing for the networks. The third, the VRS, resides on each compute hypervisor and performs distributed virtual routing with L2-L4 rules. HP also offers the 5930 top-of-rack switch, and through integration with those switches we have been able to connect bare-metal servers as well.

Let us look at a few DCN constructs to understand the solution a little better. There are four constructs here. The first is the domain: an L3 domain is equivalent to a router. Under the L3 domain you have a zone; a zone can contain multiple subnets, and any security policy you establish on the zone is adhered to by all the network endpoints in it. A subnet is, of course, an L2 segment with a single broadcast domain. Finally, security policies let you define different security rules and group them into a single policy. You can also create templates of these security policies, which helps with automation.

Talking about security policies, I have listed a few important ones here. There are ingress and egress policies, which control traffic entering and exiting the network. There is the redirect target, which steers traffic to a specific port: for example, if you do not want two networks to talk to each other directly but instead to go through a firewall, so the firewall can control the traffic according to its rule set, you can use a redirect target. This also enables service function chaining. The forwarding policy works along with the redirect target to steer traffic within the cloud and outside it. These different policies can be grouped into a policy group, which consolidates their ordering and traffic rules.

With DCN integration, the important use case we are going to see today is multi-data-center integration, which works by having the SDN controllers talk to each other through BGP (Border Gateway Protocol) federation. Without this solution, if virtual machines or legacy equipment in different data centers have to talk to each other, they use floating IPs. But floating IPs become very expensive for a CSP very quickly, and at that scale it is not practical to use FIPs for all traffic, because the security aspect also comes into the picture.
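To make the domain/zone/subnet constructs and the zone-scoped policies concrete, here is a minimal sketch of driving the VSD policy engine over its northbound REST API. The endpoint URL, resource paths, field names, and credentials are illustrative assumptions for this walkthrough, not the documented VSD API; in practice you would use the published API or SDK.

```python
# A minimal sketch of the VSD object hierarchy over a REST API.
# All URLs, resource names, and fields below are illustrative
# assumptions, not the documented Nuage VSD API.
import requests

VSD = "https://vsd.example.net:8443/api"   # hypothetical endpoint
AUTH = ("csproot", "secret")               # hypothetical credentials

def create(path, body):
    """POST a resource and return its id (assumes a JSON 'id' field)."""
    r = requests.post(f"{VSD}/{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()["id"]

# Domain -> zone -> subnet mirrors the construct hierarchy described
# above: an L3 domain behaves like a router, a zone scopes a security
# policy, and a subnet is a single L2 broadcast domain.
domain = create("domains", {"name": "multi-dc-domain"})
zone   = create(f"domains/{domain}/zones", {"name": "web-tier"})
subnet = create(f"zones/{zone}/subnets",
                {"name": "web-net", "cidr": "10.10.1.0/24"})

# An ingress rule scoped to the zone: every endpoint in the zone
# inherits it, which is what "policy on the zone" means above.
create(f"zones/{zone}/ingress-rules",
       {"action": "FORWARD", "protocol": "tcp", "port": 443})
```

Because the policy sits on the zone rather than on individual ports, adding a new subnet or endpoint under that zone picks up the rules automatically, which is what makes the templating and automation story work.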
That is why this solution, with all the security policies you can set, helps overcome that problem. With this integration and the rich functionality of the automation and REST APIs, provisioning and deployment can be done very quickly, along with setting all the security rules. The service function chaining we saw earlier can be used for firewalls as well as load balancers: we did third-party integrations with Palo Alto Networks and Fortinet firewalls, and with F5 Networks for load balancing. With the 5930 switches there is OVSDB integration, which enables a hardware VTEP, and through that the bare-metal integration becomes easier. There is also an operational tool set to help manage both the underlay and overlay networks. This SDN solution with the DCN offering shines in hybrid environments, especially when, as we saw earlier, you have an ESX region, a KVM region, bare metal, and legacy equipment on VLANs. All of those can connect to each other with the custom security policies you put in place, and it also lets us coexist with other OpenStack vendors and other VIMs. Another use case is IPAM: for IPAM we used third-party software from Infoblox, and we were able to use it not only for virtio connectivity but also for non-virtio connectivity such as SR-IOV and PCI passthrough.

So, let us look at the specific solution for this use case. In this picture you see Helion OpenStack Carrier Grade with all the core OpenStack services: compute, networking, storage, and security. There is a DCN plugin, a Neutron ML2 driver, which can coexist with the SR-IOV driver, for example. The VSD, the policy engine, is connected to the VSC, the SDN controller, through the XMPP protocol. There are multiple VSCs, for HA of course, and in the multi-data-center case you will have more than two, maybe four, maybe six depending on the need, and they talk to each other through BGP. As mentioned earlier, the VRS resides on each compute hypervisor and performs the routing and switching function; the VRS talks to the controller through OpenFlow. There is a VSD dashboard that gives administrators a view of the network connectivity, the domains and zones, and the security policies, and for automation and orchestration there are REST APIs. This northbound REST API layer can be used by orchestration software. HP has an offering called NFVD, the NFV Director, and we have done a successful POC with customers using that orchestrator on top of this integration to offer the full SDN/NFV solution.

This is the same picture I showed at the beginning, where the question is connectivity within the data center, within the OpenStack instance, and then connectivity across data centers, not only for the same hypervisor but across hypervisors, legacy VLANs, and bare metal. Here is a specific diagram showing two data centers, with a single VSD common to both. Depending on scale and practical considerations, you pick one data center to install the VSD, and for production we always recommend deploying it in HA fashion. Here you see a 3+1 cluster of VSDs, and the northbound REST API, as discussed, is used by the NFV Director in this case.
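Orchestration can also come in through the standard OpenStack APIs rather than the VSD directly. Here is a small sketch using openstacksdk; the cloud name "helion-dc1" is an assumption (it would come from your clouds.yaml). The point is that, with the DCN ML2 driver loaded, an ordinary Neutron request like this is realized in the SDN fabric rather than only in the local OpenStack instance.

```python
# Sketch: what northbound automation looks like from the OpenStack side.
# Assumes a clouds.yaml entry named "helion-dc1" (illustrative name).
import openstack

conn = openstack.connect(cloud="helion-dc1")

# Create a tenant network; the ML2 layer decides which mechanism
# driver (DCN, SR-IOV, ...) backs it.
net = conn.network.create_network(name="vnf-data-net")
conn.network.create_subnet(network_id=net.id,
                           ip_version=4,
                           cidr="192.168.50.0/24",
                           name="vnf-data-subnet")
```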
The two boxes you see are two different OpenStack instances in different data centers. The VSC, the controller, is deployed as an HA pair in each data center, and each VSC talks to the individual compute nodes of its own OpenStack instance. IP connectivity between the two data centers has to be established ahead of time, and once that is done, through the configuration and software integration, we are able to demonstrate that the VMs and bare-metal servers can talk to each other without using floating IPs.

This is a similar picture with a little more detail on the connectivity. The common VSD here is a 3+1 cluster; it uses a MySQL database internally, and the "+1" node is meant for statistics. All the statistics and analytical data are collected on that node, where they can be used by analytics software for data mining and the like. In each of the two data centers there is a pair of VSCs, shown in yellow, forming the HA pair; data center 2 likewise has two VSCs. You see two computes at the bottom, and each compute has redundant connectivity to each VSC. At each level we have made sure HA is configured, down to the port level and the NIC level, where bonding is used. There is another pair of servers called VRS-G: the VRS-G is a software gateway used for external connectivity from the cloud, for the floating IPs. There is also another flavor of the solution where the VRS-G software gateways are replaced by 5930 switches acting as hardware VTEP gateways; you can use it that way too. Each VSC is connected to the other VSC in the same data center as well as to the two VSCs in the other data center, so there is plenty of redundancy, and the VSCs are configured as BGP peers. (I will show a small sketch of this peering in a moment.)

For the multi-data-center configuration, the basic requirement is that the two data centers have IP connectivity. In one of the data centers, a VSD is deployed as an HA 3+1 cluster. There are Helion Carrier Grade OpenStack instances in each data center; you can have one or multiple OpenStack instances per data center, and it can even be a different vendor's OpenStack instance: as long as it can talk to the DCN, it will work, and there is a well-published supportability matrix. The necessary route entries need to be created as part of the configuration on the VSCs as well as on the VRSs on the compute nodes. All of the VSCs, the controllers, are configured against the common VSD, the policy engine, and there is a specific BGP configuration to be done so each VSC can talk to the other pair.

With Helion Carrier Grade we have also solved a few other use cases, such as Firewall as a Service. As mentioned earlier, we did third-party firewall integrations, so you do not need specialized hardware; you can consume it as a service on the cloud at any time. Similarly, Load Balancer as a Service is done with the F5 Networks integration, and IPAM with Infoblox.

Let us look at some of the solution highlights of Helion Carrier Grade. It has a deployer, or life-cycle manager, called HLM.
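Before going into HLM, here is the sketch of the BGP federation wiring mentioned above: each VSC peers with its HA partner in the same data center and with both VSCs in the other data center, forming a full mesh. The addresses and AS number are made up for illustration; the actual peering is configured on the VSCs themselves.

```python
# Illustrative sketch: enumerate the full mesh of BGP peerings between
# two HA pairs of VSCs, one pair per data center. Addresses and the AS
# number are assumptions, not values from the talk.
from itertools import combinations

vscs = {
    "dc1-vsc1": "10.1.0.11", "dc1-vsc2": "10.1.0.12",
    "dc2-vsc1": "10.2.0.11", "dc2-vsc2": "10.2.0.12",
}
AS_NUMBER = 65000  # single federation AS, assumed for the sketch

for (a, ip_a), (b, ip_b) in combinations(vscs.items(), 2):
    # BGP sessions are symmetric: configure each side with the other's IP.
    print(f"{a}: neighbor {ip_b} remote-as {AS_NUMBER}   # peers with {b}")
    print(f"{b}: neighbor {ip_a} remote-as {AS_NUMBER}   # peers with {a}")
```

With four VSCs this yields six peerings, which is the redundancy shown in the diagram: losing any one VSC, or even one per data center, still leaves a cross-data-center BGP session up.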
The administrator can use it to define their cloud and do all the configuration management ahead of time, with a lot of flexibility in how the cloud is defined. We have seen that it supports high availability, and for demo or test/QA purposes you can do a single-component deployment. Multi-tenancy, multi-region, and multi-hypervisor are all handled by this solution. Bare-metal integration is included, and life-cycle operations such as upgrades are available. Then there are the carrier-grade features that matter to CSPs, such as availability, and performance through SR-IOV and PCI passthrough; there was a session just before this one that covered all the performance characteristics and benchmarking. Fault detection and recovery are very important for CSPs, especially low-latency detection and recovery; live migration and host evacuation are all part of that.

We have already seen some of the DCN capabilities, so I will go over them quickly. There is a Neutron ML2 driver that can coexist with other drivers such as SR-IOV. We saw the multi-DC use case in detail. The 5930 integration provides OVSDB support and some of the DHCP functionality; it also lets the orchestrator talk directly to the switch to automate and orchestrate it. VXLAN-to-VLAN connectivity is established through that, and there are rich security policies.

This is a block diagram of how Helion Carrier Grade looks. At the bottom you have common services such as monitoring, logging, and fault management. Then there is HLM, the deployer. At the top there are shared services, core services such as Keystone, with some LDAP integration involved; you also have Horizon and other services. For backend storage we use 3PAR and LeftHand. For Neutron, as you have seen, we have the DCN ML2 plugin and the SR-IOV plugin, and we have done third-party integrations for IPAM with Infoblox, and also for firewalls. The middle layer shows the different regions for the hypervisors and bare metal: KVM, ESX, and bare metal. On the KVM and ESX hypervisors, a flavor of VRS resides on each node for the routing function.

When I talked about HLM being flexible in terms of deployment: there are four ways you can deploy the OpenStack cloud. One is a single region with no DCN involved, with a single AVS type of compute. The other three come with DCN integration: one with just a KVM region, one with KVM and ESX regions, and one with KVM, ESX, and bare metal. (A small sketch of these four options follows at the end of this section.) So there is a lot of flexibility; depending on the use case and the need, administrators can choose how to deploy.

Some of the carrier-grade key capabilities we have seen, and some were covered in the earlier session, but I will recap here. There is no single point of failure: we have made sure there is HA at each level, including the ports, the computes, the controllers, and the connectivity. On data-plane performance, we get near-line-rate throughput. From a manageability perspective there are hitless upgrades, as well as performance management through Ceilometer and Heat. There is advanced resource scheduling, which helps boost performance and lets the administrator plan things ahead of time. So these are some of the solution benefits we have seen.
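Here is the sketch of those four HLM deployment options, expressed as a simple data structure with a helper that picks the smallest model covering a given need. The field names and the helper are purely illustrative; they are not HLM's actual input-model format, which is defined by the deployer itself.

```python
# Illustrative summary of the four deployment shapes described above.
# Field names are assumptions, not HLM's real input model.
DEPLOYMENT_MODELS = [
    {"name": "single-region",         "sdn": None,  "regions": ["KVM (AVS)"]},
    {"name": "dcn-kvm",               "sdn": "DCN", "regions": ["KVM"]},
    {"name": "dcn-kvm-esx",           "sdn": "DCN", "regions": ["KVM", "ESX"]},
    {"name": "dcn-kvm-esx-baremetal", "sdn": "DCN",
     "regions": ["KVM", "ESX", "bare metal"]},
]

def pick_model(need_esx=False, need_baremetal=False, need_sdn=True):
    """Return the smallest deployment model that covers the stated needs."""
    for m in DEPLOYMENT_MODELS:
        if need_sdn and m["sdn"] is None:
            continue
        if need_esx and "ESX" not in m["regions"]:
            continue
        if need_baremetal and "bare metal" not in m["regions"]:
            continue
        return m

print(pick_model(need_baremetal=True))  # -> the KVM+ESX+bare-metal model
```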
There is HLM for deployment flexibility and life-cycle operations; the DCN capabilities, our SDN solution, give us the multi-DC deployment, service function chaining, and the security policies; and the carrier-grade capabilities provide availability, performance, and manageability, all of which reduce OPEX through SDN and NFV automation and orchestration.

There are some carrier-grade proof points that Madhu and Eddy already talked about earlier. We compared standard OpenStack on Linux with our carrier-grade OpenStack. For example, detection of a failed VM happens very quickly, within 500 milliseconds, compared to more than a minute. Similarly, for compute-node and control-node failures we have HA, which gets failure detection down to within 25 seconds. Live migration of DPDK apps comes in at 200 milliseconds, and link failure detection happens within 50 milliseconds, though we do not have comparison data for standard OpenStack on that last point at this time.

So, we saw at the beginning that CSPs face challenges with this complex multi-data-center integration: automation and network orchestration, availability and performance, multi-data-center deployments, and the SDN capabilities. With HP's Helion Carrier Grade we are able to solve these through the HCG, DCN, and NFVD integration. We have done carrier-grade hardening of the OpenStack instance. We have seen the multi-data-center deployment as today's use case, and the SDN capabilities allow a network to be extended to the other data center, so it can talk to the VMs as if they were on the same L2 domain. And with that, here is the same slide we started with: with this solution, we get connectivity within and across the data centers. Thank you. Are there any questions?

Could you comment on the scalability of the solution? How many VMs or how many hypervisors might you be able to control under one of the VSDs?

As far as Helion Carrier Grade is concerned, we have gone to a scale of 50 computes, 50 hypervisors, on the KVM side, and you can have a similar number on the ESX side. As far as deployment is concerned, since ESX is third party, we use the existing ESX deployment you already have, and we install some components and do the configuration on the ESX side to get the integration done.

So, 50 hypervisors?

Fifty on the KVM side, yes, and then you can also have bare metal on your existing VLANs, so you can have multiple regions. In our next release we are going to allow up to 200 nodes. Thank you.

You mentioned that in the two-data-center case the common VSD is deployed in one of those data centers. Does that not create a single geographic point of failure on that data center? If something were to happen to that one data center and the entire thing was taken out of action, how could I configure the network on the other one?

Right, that is a very good question, and that is why, as you saw in the diagram, there is an HA 3+1 cluster. You can also make this cluster geo-distant: if you want two servers in one data center and two in the other, that is allowed as well. It is more of a deployment choice.

Thanks, Nayana, for sharing your experience around this. Quick question.
It seems like in this particular use case the VRS was still at layer 2/layer 3, VXLAN and VLAN. Have you looked at extending BGP or MPLS all the way down to your VRS, and how that could happen? That is a pretty important use case when it comes to CSPs.

Okay, as of the current release we have not, but we will definitely take that input and look into it.

My question is more around the DCN extensibility and pluggability. VSP obviously is a Nuage network architecture. Is HP looking to expand this to support, validate, and test other vendors' data center architectures, such as Cisco, Big Switch, and others, or is HP pretty much partnering with Nuage on this?

Today, the SDN integration you have seen is with Nuage Networks. We do have the 5930 as one of the switches offered for this integration, and we are looking into supporting a few other switches as well, but that is in the future.

But just the switches, not the actual overall SDN controller and the underlying guts underneath?

In the current release we have DCN as one of the options. In the next release we are going to have another SDN option, called ConteXtream. So we will expand slowly. Thank you.

Can you tell us how you handle production upgrades and rollbacks?

For the KVM region there is an in-place upgrade story being worked on today, and for the DCN upgrades, since the components are all in HA pairs, in production you always do one at a time. For example, you upgrade one VSC, fail over to the other, and then do the other one. So it is one at a time.

But is that manually managed by the personnel doing the upgrade, or is it something baked into your product, where you say "upgrade" and it goes and upgrades in an order such that outages are prevented?

Right. Today, for the DCN, we have manual instructions for upgrading, but we are working on an automatic upgrade for the next release.

What is the maximum data center size you support with one instance of Helion, in terms of how many computes?

That is the same question as earlier: in the current release we support 50 compute nodes in the KVM region. You can also have bare metal and existing equipment connected, and in the next release we are increasing that to 200 nodes.

HP has been continuously changing the reference hardware for Helion to be certified. What is the current hardware, briefly? Is it c7000 based or HP DL series based?

We have both: DL380 Gen9 servers and also the blade servers.

Okay. For the blades there is a possibility to use the Virtual Connect modules like a top-of-rack switch, but there are no ML2 plugins published by HP, and I know you have ML2 plugins inside your solution. Are you planning to upstream those? Thank you.

As far as the ML2 driver is concerned, it is already upstream from Nuage.

It's already upstream?

Not for the VTEP, but for the Neutron driver. Any other questions? Okay. Thank you.