Hi everyone, welcome to our session. This session is about how Tacker works and our plan to deploy Tacker in the cloud. Hussein from Nuage and I will jointly present this session. My name is Jinto, and I come from China Mobile. I'm leading the SDN team in the China Mobile Suzhou Research Center. Our company is responsible for building public clouds and private clouds for China Mobile Group. We have built 50 clouds across China, with more than 10,000 nodes deployed.

There are three parts in our presentation. I will present the first part, Hussein will present the second part, and we will leave five minutes for Q&A. OK, let's get started. First, let me give you an overview of the China Mobile Public Cloud and our current implementation of SDN and NFV, then the problems of managing these NFV components and our solution. In the end, we will give you several examples of how to use Tacker to solve these problems.

OK, let's have a look at the China Mobile Public Cloud. Today it runs 3,000 nodes in China Mobile's Beijing and Guangzhou data centers, on top of OpenStack. We plan to add 2,000 more nodes later this year, two more data centers will be added soon, and the total size of our public cloud will reach 30,000 nodes by 2020. The China Mobile Public Cloud offers SaaS, PaaS, and IaaS. Today, let's focus on the infrastructure-as-a-service part. There are six major components in the IaaS: compute, storage, network, security, management, and monitoring. For the network part, we provide VPC, CDN, FIP, VPN, and load balancer services to the end user.

Some details about our network services. From the user-experience side, tenants can define their own VPC topology and select networking services like subnets, security groups, virtual routers, virtual firewalls, virtual load balancers, FIP, PAT, rate limiting, and VPN. Our public cloud supports a very large scale: 16 million isolated virtual networks, 10,000 virtual routers, and 100,000 subnets. In our cloud's SDN network, all the virtual routers and switches are fully distributed; there is no central point and no traffic tromboning.

Here is our public cloud architecture. It includes three tiers. The first tier is the service logic tier, which manages the business logic of the cloud. It is called the operation platform, a portal for end users to manage customer configuration and service logic. The network administrator uses the Nuage VSD to manage network policy. The second tier is the service control tier. All network resources are controlled and managed by Neutron in this tier, including networks, subnets, ports, security groups, virtual routers, floating IPs, virtual firewalls, virtual load balancers, and VPN. The last tier is the Nuage SDN and NFV solution.

This is the current OpenStack SDN and NFV management architecture. OpenStack Neutron manages the SDN controller for the virtual-network-related operations and the NFV-related operations; both are managed through different Neutron plugins. On the left side are the OpenStack controller nodes, which run Nova, Keystone, Glance, Cinder, Neutron, and so on. Neutron's Nuage plugin calls the Nuage VSD REST API, the VSD controls the Nuage VSC via XMPP, and the VSC controls the Nuage VRS via OVSDB and OpenFlow. The VRS runs on the compute nodes and the NFV nodes, and it implements the network, subnet, port, security group, DVR (distributed virtual router), source NAT, and floating IP functions.
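To make the service control tier concrete, here is a minimal sketch of how a tenant VPC topology like the one described above could be driven through the standard Neutron API; in our setup the Nuage plugin then translates these calls southbound to the VSD, VSC, and VRS. The auth URL, credentials, and resource names below are placeholders, not our production values.

```python
# Minimal sketch: building a small VPC-style topology through the Neutron API.
# Credentials, endpoint, and names are illustrative placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://keystone.example.com:5000/v2.0")

# Tenant network + subnet (an isolated virtual network in the VPC).
net = neutron.create_network({"network": {"name": "vpc-net1"}})["network"]
subnet = neutron.create_subnet({"subnet": {"network_id": net["id"],
                                           "ip_version": 4,
                                           "cidr": "10.0.0.0/24",
                                           "name": "vpc-subnet1"}})["subnet"]

# Distributed virtual router and its interface on the subnet.
router = neutron.create_router({"router": {"name": "vpc-router1"}})["router"]
neutron.add_interface_router(router["id"], {"subnet_id": subnet["id"]})

# A security group allowing inbound HTTP, as an example policy element.
sg = neutron.create_security_group(
    {"security_group": {"name": "web-sg"}})["security_group"]
neutron.create_security_group_rule({"security_group_rule": {
    "security_group_id": sg["id"], "direction": "ingress",
    "protocol": "tcp", "port_range_min": 80, "port_range_max": 80}})
```

The same calls work against any Neutron deployment; the SDN-specific behavior comes entirely from the plugin mapping them onto VSD policy.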
The firewall as a service, load balancer as a service, and VPN as a service are implemented by firewall VMs, LB VMs, and VPN VMs, which run on the NFV nodes. The 7750 is controlled by the VSC; it is the VXLAN gateway device for the south-north traffic.

Here are the problems and the challenges. The first problem is an upstream Neutron limitation: SDN and NFV appliance management and deployment are coupled to the OpenStack management plane. Because of this limitation, it's very difficult to decouple the NFV functions from the SDN, and it's also impossible to have multiple NFV vendors work on top of the same SDN controller. Problem two is that with upstream Neutron it's quite difficult to support multiple vendors for the same VNF, because there is no general mechanism to manage the lifecycle of a VNF. For example, it's quite hard to support multiple vendors for firewall as a service. Problem three is that current upstream Neutron only supports very limited VNF functions. For example, firewall as a service only supports ACLs, with no stateful support and no application-layer features. For most commercial firewalls, only 30% of the functions are covered by the Neutron firewall-as-a-service API. Problem four is that there are no APIs defined for security appliances, for example IDS, IPS, WAF, and so on. Problem five: there is no mechanism to support VNF resource auto-scaling. For example, upstream Neutron cannot automatically add a load balancer VM, or enhance the capacity of a load balancer VM, in a load balancer cluster. Problem six: upstream Neutron doesn't support multi-data-center SDN and NFV operation. For example, the upstream Neutron API cannot spawn two load balancer instances at the same time in two different data centers.

Now let's talk about the Tacker solution. Tacker is based on the ETSI MANO architecture framework and provides an NFVO and a VNFM to orchestrate end-to-end network services using VNFs. Tacker is designed to support multiple VIM sites. Tacker has four kinds of drivers: the infra driver, the monitoring driver, the management driver, and the SFC driver. If we use OpenStack as the VIM, Tacker's infra driver will call Heat to create a VNF instance. The monitoring driver and the management driver are used to monitor and configure the vendor's VNF. The SFC driver is used to chain VNF traffic by calling the SDN controller's API.

Here is the new architecture. Neutron is used to manage the SDN only. The firewall-as-a-service, LB-as-a-service, and VPN-as-a-service APIs are no longer used for the firewall, load balancer, and VPN appliances. In the new framework, we use Tacker to manage the firewall, LB, VPN, the NAT gateway, and all the security VNFs. Tacker calls the SDN controller's API to interact with the SDN.

Now let's have a quick review of the problems we discussed on the previous pages. For the first problem, Tacker makes it easier to decouple SDN and NFV management. For problem two, Tacker makes it easier for the SDN to work with multiple vendors' VNFs. For problem three, we can support all of the commercial VNF features because Tacker has no API limitation. For problem four, Tacker enables security appliance management. VNF appliance auto-scaling is supported using Tacker's monitoring driver and management driver. Multi-data-center SDN and NFV management is supported because Tacker can manage multiple VIMs.

Now, a case study. In this case, we intend to show how to use Tacker to support a vendor's advanced features and to manage security appliances. On the left side, we use Tacker's VNFD templates to manage Hillstone's CloudEdge firewall. Hillstone is a local Chinese security company.
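Before continuing with the case study, here is a minimal sketch of what driving Tacker with a VNFD looks like: onboard a descriptor, then instantiate a VNF from it, after which Tacker's infra driver calls Heat to boot the appliance. The REST endpoint paths, the TOSCA node types, and the payload fields are assumptions based on the Tacker releases of that time and may differ by version; the Tacker URL, token, image, and network names are placeholders.

```python
# Sketch only: onboarding a VNFD and creating a VNF via Tacker's REST API.
# Endpoints, fields, and the TOSCA content are illustrative assumptions.
import json
import requests

TACKER_URL = "http://tacker-api.example.com:9890"  # hypothetical endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",     # obtained from Keystone beforehand
           "Content-Type": "application/json"}

# A trimmed TOSCA VNFD: one VDU booted from a vendor firewall image,
# with one connection point on a management network.
VNFD_TOSCA = """
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Example firewall VNF descriptor (illustrative only)
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: vendor-fw-image      # placeholder image name
        flavor: m1.medium
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink: {node: VL1}
        - virtualBinding: {node: VDU1}
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker
"""

# 1) Onboard the VNFD (the catalog entry describing the appliance).
vnfd = requests.post(f"{TACKER_URL}/v1.0/vnfds", headers=HEADERS,
                     data=json.dumps({"vnfd": {"name": "fw-vnfd",
                                               "attributes": {"vnfd": VNFD_TOSCA}}})).json()

# 2) Instantiate a VNF from it; the infra driver then drives Heat/Nova.
vnf = requests.post(f"{TACKER_URL}/v1.0/vnfs", headers=HEADERS,
                    data=json.dumps({"vnf": {"name": "fw-instance-1",
                                             "vnfd_id": vnfd["vnfd"]["id"]}})).json()
print("VNF created:", vnf["vnf"]["id"])
```

The vendor-specific configuration and health checks would then be applied through the management and monitoring drivers rather than through the Neutron advanced-service APIs.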
After we implemented Tacker, we could use OpenStack to manage and configure almost all of the vendor's advanced firewall features. That would not be possible if we used upstream Neutron to manage them. On the right side, we use Tacker's VNFD templates to manage Hillstone's IDS, IPS, and WAF. With this implementation, we can use OpenStack to manage and configure the IDS, IPS, and WAF. What's more, we can chain the traffic between the security VNFs. This would definitely not be possible with upstream Neutron.

And this is a case study for VNF appliance auto-scaling management. In this case, we use LVS and HAProxy to build a load balancer cluster. LVS is the LB front end and HAProxy is the LB back end. Tacker is able to automatically add or remove HAProxy appliances in the load balancer cluster on demand, which means that with this feature OpenStack is able to auto-scale the VNF functions in or out. This feature is very hard to support with upstream OpenStack Neutron. OK, that's all of my presentation. Hussein will present the next part. Thank you.

Thank you, gentlemen. So equally important in this new architecture is the networking part. We at Nuage have a virtualized networking platform that complements what's available in OpenStack and enables projects like Tacker to take full advantage of it. I'll start with the management layer. At the top is the Virtualized Services Directory (VSD), which is a policy manager with a northbound API. We can support multiple cloud management systems, and that API can be used by any other orchestrator that sits adjacent to them. That's where Tacker can make additional calls, supplementary to the ones that go through Neutron, to enable advanced functionality from a networking perspective. It's also a multi-tenant solution, so you can have complete isolation between tenants for the different network functions while using the same infrastructure and platform.

The next layer is the control plane, represented by the VSC, which is essentially a virtualization of the service router that Alcatel-Lucent, now Nokia, has had for 15 years. It's basically a full-blown service router control plane that supports all of the routing protocols: OSPF, IS-IS, BGP. What that enables you to do is federate controllers. As you've seen, theirs is a massive network in terms of the number of hosts supported, spanning multiple data centers. The ability to federate controllers lets you expand your footprint across data centers by just adding pairs of controllers for redundancy and peering with a PE router to go across the WAN.

The third layer is the data plane, which is another important piece. Here we allow you to support virtualized workloads and virtual network functions connected to bare-metal assets or other appliances, and even containers: we can provide distributed routing, switching, and service chaining capabilities to connect those different workloads. So no matter where your virtual network function is and where the traffic it needs to serve or process comes from, you have that continuity, regardless of what infrastructure you have and where it is.
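As a rough illustration of the VSD's northbound API mentioned in the management-layer description, the sketch below shows how an external orchestrator such as Tacker could authenticate against the VSD and query tenant network objects to supplement what comes through Neutron. The base URL, API version, header, and field names are assumptions for illustration, not the exact Nuage interface.

```python
# Illustrative only: authenticating to an assumed VSD northbound REST API and
# listing L3 domains an orchestrator could attach VNFs to. URL, version,
# headers, and fields are assumptions, not a documented Nuage contract.
import requests

VSD = "https://vsd.example.com:8443/nuage/api/v5_0"  # assumed base URL/version
BOOTSTRAP_AUTH = ("csproot", "csproot")               # placeholder credentials
ORG = {"X-Nuage-Organization": "csp"}                 # assumed enterprise header

# 1) Log in: the VSD is assumed to return an API key that replaces the
#    password for subsequent requests.
me = requests.get(f"{VSD}/me", auth=BOOTSTRAP_AUTH, headers=ORG,
                  verify=False).json()[0]
api_auth = (BOOTSTRAP_AUTH[0], me["APIKey"])

# 2) Example supplementary call: list L3 domains visible to this tenant,
#    which an orchestrator could use when wiring a VNF into an existing VPC.
domains = requests.get(f"{VSD}/domains", auth=api_auth, headers=ORG,
                       verify=False).json()
for d in domains:
    print(d.get("name"), d.get("ID"))
```

The point of the sketch is simply that the policy manager is reachable over a multi-tenant REST interface, so an orchestrator can push or read network policy directly when Neutron's API does not expose the needed feature.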
The other thing is that we work with partners on acceleration, for example VXLAN offload and OVS offload. There are a lot of different network functions with specific requirements, and having a data plane that can adapt to those and enable them, or provide service chaining and quality of service with some form of acceleration, is available to you on the data plane.

Now, how does this fit into the architecture? If you look at it, one of the problems was multi-data-center or multiple OpenStack instances. The solution, because of that multi-tenancy and the API interface, is that Tacker can make direct calls to supplement the calls that come through Neutron and enable that enhanced or advanced functionality for the VNFs. You can actually span data centers, whether with completely isolated OpenStack instances or with one OpenStack instance that spans multiple data centers. From a service chaining perspective, you can support scale-out of a particular network function. You can have advanced functionality, and you're open to a broad set of vendors that can be programmed through an API, and then you can hook in advanced network functionality through an API from Tacker to the Nuage VSD. And this kind of concludes the presentation; we'll basically open it up for questions if you have any. If you do have questions, please step up to the mic so they get recorded.

Hi. So I'm gathering from the design that the essence of a service chain, for example, would be communicated both to the Nuage control infrastructure and to the VNFs from Tacker. Is that true?

Yes.

Hey there. Can you give a quick update on auto-scaling? It was an idea for a while. Is auto-scaling working in Tacker now? Was it deployed recently, or is it still in progress?

OK, we just tested the function, because Tacker has the monitoring driver and the management driver. We used the open-source load balancers LVS and HAProxy to construct the load balancer cluster. We just tried it; it's not in commercial use yet. Thank you very much for your time.