So good morning. I hope you have enjoyed the keynotes. I'm Fuqiao from China Mobile, from the network and IT department. I'm the project manager for the China Mobile experiment network, and I'm also responsible for edge cloud design, acceleration, and a lot of NFV-related topics and work within China Mobile. In today's talk I would like to share the national experiment network for NFV testing in China Mobile: what kind of testing we have done, and ten lessons we actually learned from all this testing.

I will first begin with some basics of the future network architecture in China Mobile. I actually use this picture a lot when I introduce the future network for China Mobile. We say that the future network is based on a new data center, a new network and new operations. The new data center is based on the traditional central office, redesigned so that it has an NFVI base to support all the virtualized network functions. The new network is based on SDN, so as to improve the agility of the network. And the new operations are based on an orchestrator like ONAP, to provision services online and improve service agility.

The future network of China Mobile is constructed from what we call the telecom integrated cloud, TIC. TICs are standard units used to construct the future network. They are deployed in a hierarchical manner, with core TICs sitting in the major provinces and districts and edge TICs distributed to the cities and counties. We say the TIC is a standard unit because it has a limited number of design templates for the NFVI, the VIM and the NFVO, it has unified hardware models, and it has a standard network design.

We actually began this kind of NFV work back in 2012. Then in 2015 we built the first OPNFV lab in Asia. It is an open NFV lab in the sense that we donate some of our resources to the OPNFV community for NFV-related testing, and within that lab we had about 13 vendors join the lab testing. In late 2016 we evolved this open lab into what we call the NovoNet experiment network, targeting future network architecture validation. We hope that with this experiment network we can validate new technologies like NFV and SDN, and see whether these new technologies can work in a large-scale network.

Since late 2016 we have built this experiment network in four provinces. We finished phase one in 2017, finished phase two this year, and have just begun phase three. Within the experiment network we now have seven data center sites and we have already constructed 15 TICs. We have already looked into VNFs including virtual IMS and virtual EPC, virtual BRAS, and virtual CPE (the vE-CPE, which is actually the name for the virtual CPE for enterprise users within China Mobile), and in phase three this year we are also looking into VNFs including virtual CDN and 5G cloud VNFs. We have included lots of vendors in this experiment network: by my counting, about nine virtual infrastructure vendors, five VNF vendors, three orchestrator vendors and four SDN vendors by the end of phase two. Now, with phase three, I guess the numbers will increase as well.

So this is the basic idea of how we actually run this whole experiment network. It is not just a network on which you do testing; we also include self-integration and key feature reviews in the whole experiment network work. We are trying to promote this work in three threads.
First, we do the self-integration of the hardware and the different software, including the SDN controllers and orchestrators, from multiple vendors. Through this process we can actually uncover lots of potential problems, get first-hand experience of the whole future network integration and operation, and learn in advance what we should change. These are the new things you need if you want to operate a new NFV network. Then, based on the networks we have integrated, we do multiple kinds of testing, including virtualized infrastructure testing, multiple VNF testing, SDN testing and MANO testing. And based on all the integration and testing experience and results we get, we begin to review the key questions: is the virtualization of telco services ready enough? How should we actually benchmark and choose the virtualized infrastructure layer? A lot of different questions can be answered with all this testing. So in today's presentation I will share ten things that we have actually observed, and I hope it will help some of the community work here.

A little recap on what we did in the past two phases. Phase one, which began in late 2016 and ended in the middle of 2017, included five data centers across three provinces. We tested the virtualized platforms from five vendors, mostly what we call traditional IT vendors, like Wind River and Red Hat. At that time we had one SDN vendor included, for controlling the network within the data center. We had three services considered, and we constructed 14 TICs. Beginning in phase two, we included more data centers, with Beijing joining the whole experiment network as a new province. We continued testing the virtualized platforms with three chosen vendors from phase one, and we added four extra vendors for phase two. In this phase we also increased the virtualized infrastructure test cases to about 200 cases. In phase two we had more SDN vendors join the test, and we also tested SDN control among data centers. We had a new service, the virtual BRAS, included for phase two, and we had 15 TICs constructed in total. This September we began phase three. We still have seven data centers in four provinces, but this time we move more of the focus to edge cloud infrastructure testing, integration automation testing, ONAP field trials and 5G network testing.

So now I will give some of the lessons we learned through all this work. The first is: what kind of features should actually be tested for a virtualized service? This is a question that is often asked, especially by IT vendors or small companies that are not the traditional vendors involved in this whole NFV activity. They are curious what kind of testing you actually have to do when you have all these VNFs moving into your network. For now, we are still following the traditional telco service testing routine: you have functional testing, you have availability testing, you have performance testing, but we also add some new testing, what we call virtualization testing. Functional testing is easy to understand: it is the traditional network function testing you should do following the 3GPP specifications, making sure that the virtualized functions provide the same functions as the non-virtualized ones.
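The next lessons walk through the remaining categories one by one. As a preview, here is a minimal sketch, assuming a pytest-based suite, of how cases in the different categories might be organized; the FakeVnf stand-in and all names here are hypothetical, not our actual tooling:

```python
# Hypothetical sketch of organizing VNF test cases by category with pytest
# markers. FakeVnf is a stand-in for a real VNF management client.
import pytest


class FakeVnf:
    def __init__(self):
        self.instances = 1
        self.healthy = True

    def run_attach(self):
        return True           # real suite: drive a 3GPP attach procedure

    def kill_one_vnfc(self):
        self.healthy = False  # real suite: kill one VNFC process or VM

    def wait_until_healthy(self, timeout_s):
        self.healthy = True   # real suite: poll for self-recovery
        return self.healthy

    def scale_out(self, count):
        self.instances += count  # real suite: trigger scale-out via MANO


@pytest.fixture
def vnf():
    return FakeVnf()


@pytest.mark.functional
def test_attach_follows_3gpp(vnf):
    assert vnf.run_attach()


@pytest.mark.availability
def test_vnfc_self_recovery(vnf):
    vnf.kill_one_vnfc()
    assert vnf.wait_until_healthy(timeout_s=120)


@pytest.mark.virtualization
def test_scale_out(vnf):
    vnf.scale_out(1)
    assert vnf.instances == 2
```

Run with, for example, `pytest -m availability` to exercise one category; the real cases replace the stand-in with drivers against the VNF's actual management interfaces.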
But because they are virtualized, you should probably add more virtualization testing, to make sure they have the so-called virtualized features like scale-in and scale-out, automatic deployment, and working with MANO. Availability testing is a little bit different from what we did for the traditional physical network functions. The traditional ones are just a big box on which you do HA testing. But now that things are virtualized, you should probably look deeper into HA, not only for the hardware but also for the software. For example, we have testing at the VNF layer, making sure a function follows the same SLA as the traditional service, and also at the VNFC layer, to make sure the VNFC is capable of self-recovery. For performance testing, we run almost the same tests as for the traditional physical network functions, but we also want to figure out, for the same throughput requirement as the physical network function, how many servers we actually need and what the performance is. If too many servers are required, or the product can hardly meet the performance requirements, it probably makes no sense for us to virtualize that service for now.

Lesson two: are telco services ready to be virtualized and deployed? This is also something I get asked a lot in different situations: do you think these services can really be virtualized? You can see that most of our testing mainly focused on core network services at the beginning, and then in phase two and phase three we added more edge services like virtual BRAS and virtual CDN. We have some observations here. For the virtualization of control plane services, we think they are quite ready. But the virtualization of user plane services probably needs further testing and verification. For example, for the user plane of the mobile core, the acceleration technology is something we are worried about. We also observed a certain difference in performance between SR-IOV and OVS-DPDK, because with OVS-DPDK we sometimes observed considerable performance fluctuation. These are things you have to take into careful consideration. For the user plane of the residential gateway, we actually saw a considerable performance loss with a virtualized user plane, so again, hardware acceleration should be utilized. We have also done performance testing with 25 Gbps and 40 Gbps NICs, and we see that for the user plane these should probably be considered as well. So: mobile core services are mature enough; we see a lot of vendors that already have mature products they can put into our experiment network for testing. But the other services still need to be improved regarding the virtualized features. For example, for the virtual CPE for enterprise users in phase two, when we did the testing, most of the vendors' products could not do scale-in and scale-out.
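As an illustration of the kind of tuning involved on the user plane, here is a minimal sketch, assuming a recent openstacksdk and a clouds.yaml entry named "novonet" (both are my assumptions, not our actual setup), of creating a flavor with dedicated CPUs and hugepage-backed memory, which is the usual starting point when chasing OVS-DPDK performance fluctuation:

```python
# Sketch: create a flavor tuned for a user-plane VNF on an OVS-DPDK host.
# Assumes a recent openstacksdk; "novonet" is a hypothetical clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="novonet")

flavor = conn.compute.create_flavor(
    name="vnf.userplane.large",
    ram=16384,   # MiB
    vcpus=8,
    disk=40,     # GiB
)

# hw:cpu_policy and hw:mem_page_size are standard Nova extra specs:
# pinned vCPUs and hugepage-backed memory reduce data-path jitter.
conn.compute.create_flavor_extra_specs(flavor, {
    "hw:cpu_policy": "dedicated",
    "hw:mem_page_size": "large",
})
```

On the host side this also assumes hugepages and CPU isolation are already configured; the flavor alone is not sufficient.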
Lesson number three: what kind of features should OpenStack be tested for? I also work in the OPNFV community, and this question is always raised there, because we are working on the OVP program, trying to give the community some verification capability for the open infrastructure for telco. One thing we keep considering is what kind of testing we can actually conduct on these OpenStack services to make sure they actually fit the telco requirements. We have a quite complicated virtual infrastructure testing specification: in phase one we had about 50 test cases, moving to phase two we increased that to 200, and now I guess the specification has more than 300 test cases. So it is a huge workload. We have test cases covering functional testing, interface testing, performance testing, availability testing, and physical infrastructure provisioning testing.

The functional tests are quite basic; as you can imagine, they include the virtualization functions of the hypervisors and virtualized resource management for the VMs. They are quite general. For interface testing, we work more on making sure that the VIM northbound interface follows the open source OpenStack northbound APIs, hoping this will help us ease the integration between the VIM and the components above it. For performance testing we have multiple types: stress testing, such as how many VMs or subnets you can create with the given resources; compute testing, such as how much time it takes to create, delete or recover a certain number of VMs; and network forwarding testing with the vSwitch. We also have availability testing, including availability of the VM, the hypervisor, the VIM and the database.

Then comes lesson number four: are the OpenStack platforms good enough? We actually did a count once of how many virtual infrastructure vendors there now are; I guess the number reaches 12 or 13, at least in China. And I guess we have tested almost all of them in our experiment network, not all fully: some had quite complete testing, and some, just starting in phase three, have had only about 50 test cases tested. But we have some general findings for all these platforms. Almost all vendors can achieve quite a high pass rate on the functional testing, because those tests are quite basic. The major differences actually appear in performance testing, and we do see that the closed-source vendors somehow have better results. Most open source vendors cannot meet the requirements for telco-grade availability. For example, we have specific test cases for VM HA and hypervisor HA; we see some ongoing work here in OpenStack, in the Masakari project and so on, but it will still take some time for the open source vendors to actually include these features in their products. And almost all the systems we tested remain to be improved in the capability of managing and controlling the physical infrastructure, which is actually a basic requirement for telco.
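For the HA cases, the core of the measurement is simply how long a service stays unreachable after we inject a failure. Here is a simplified sketch of such a probe; the target address and SLA threshold are made up for illustration, and the real cases follow our test specification:

```python
# Sketch: measure recovery time of a service endpoint after fault injection,
# e.g. killing a VM to exercise VM HA (Masakari-style recovery). The target
# address and SLA threshold below are illustrative assumptions.
import socket
import time

TARGET = ("192.0.2.10", 5060)   # hypothetical VNF service endpoint
SLA_SECONDS = 120               # hypothetical recovery-time requirement


def is_up(addr, timeout=1.0):
    """Return True if a TCP connection to addr succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


def measure_outage(addr, poll_interval=1.0, max_wait=600):
    """Wait for the endpoint to go down, then time how long until it is back."""
    while is_up(addr):            # fault is injected externally (e.g. kill VM)
        time.sleep(poll_interval)
    down_at = time.monotonic()
    while not is_up(addr):
        if time.monotonic() - down_at > max_wait:
            raise TimeoutError("service never recovered")
        time.sleep(poll_interval)
    return time.monotonic() - down_at


if __name__ == "__main__":
    outage = measure_outage(TARGET)
    print(f"outage: {outage:.1f}s, SLA met: {outage <= SLA_SECONDS}")
```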
Lesson number five: should we choose one vendor of OpenStack? This is actually a big question in China Mobile, and I guess it is the same problem for other operators in other large countries: you have so many vendors, and you have huge areas you have to provide the service in. If you have more than one vendor of OpenStack, there are problems. Vendors sometimes strictly follow the RESTful APIs, but we still observe that they may have different extensions or parameters, which take us lots of time to look into. And the interoperability testing for hardware, OpenStack, SDN controllers, VNF managers and so on is a huge workload, especially when the different vendors also have different version planning, and it is difficult for us to coordinate the different vendors for long-term deployment. So bringing more than one vendor of OpenStack into our cloud is costly.

But there are also a lot of cons the other way. It is actually risky for us to have only one vendor, especially when this industry is changing so quickly. And in a country as large as China, you can hardly find one vendor with the capability to support the cloud for the whole country, especially when you move to the edge, when you have more than a thousand data centers that you have to provide cloud infrastructure in. It is impossible for one vendor to do so. So for now, I guess China Mobile is moving in the direction of having more than one vendor of OpenStack. That is the current answer I can give for this.

Lesson number six is related to SDN. We have done some testing of NFV working with SDN within the data center: how they should work together, how the integration goes, what the performance is. We see that it is possible that within one TIC you could have the VIM and the SDN controller from different vendors. Of course, having them from one vendor will avoid a lot of problems. But to us the problem is that the SDN controllers have to work with a lot of hardware, like the firewalls, the gateways, the TORs, and the interfaces between the SDN controllers and this hardware are quite vendor-specific. You cannot have one vendor's SDN controller work with another vendor's hardware. So somehow we end up purchasing this hardware, these firewalls and gateways, together with the SDN controllers. And if these are then added together with the VIMs, that becomes a huge single procurement for us, which will probably bring some problems. So we are investigating whether it is possible to have the VIM and the SDN controller from different vendors.

The SDN controller's northbound is actually connected with OpenStack Neutron through a standard Neutron plug-in, where we have found few if any problems. The main problem is actually located in the southbound interfaces interconnecting with the virtual switch and the hardware. For the hardware, it is almost impossible for us to decouple the controller from the hardware. For the software, we see that most of the VIM vendors actually bring their own vSwitch into the virtualized infrastructure, so if the VIM and the SDN controller are from different vendors, the SDN controller probably needs to work with a vSwitch from a third vendor, and that can be a problem. We know we have OpenFlow and we have OVSDB in place to decouple these interfaces, but the different vendors all have their own proprietary extensions in their SDN controllers and their vSwitches, and OpenFlow and OVSDB are far from sufficient to cover all these interface issues. So here we can see there is still a long way to go.
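The reason the northbound is the easy part is that, whatever controller sits behind it, a tenant only ever talks to the standard Neutron API. A minimal sketch with openstacksdk (again, "novonet" is a hypothetical clouds.yaml entry) that stays entirely vendor-neutral:

```python
# Sketch: northbound network provisioning through standard Neutron APIs.
# This works unchanged whichever vendor's SDN controller backs the Neutron
# plug-in; "novonet" is a hypothetical clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="novonet")

net = conn.network.create_network(name="vnf-data-plane")
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.10.0.0/24",
)
port = conn.network.create_port(network_id=net.id)
print(f"network {net.id}, subnet {subnet.cidr}, port {port.id}")
```

The vendor-specific behavior only shows up below this API, in how the controller programs the vSwitch and the hardware, which is exactly where the decoupling breaks down.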
Lesson number seven: what is the difference between core and edge? This question actually arose when we began to build the TICs for the virtual BRAS, for the virtual CPE and, in phase three, for the virtual CDN. We see that the core TICs are more like cloud data centers: they have large numbers of servers, lots of services, and huge virtualized pools for resource sharing and scale-in and scale-out. So a core TIC is mostly just a cloud, mainly designed for the control plane, with no specific requirement for acceleration for now. And the service management capability for both core and edge, the NFVO, the VNFM, all these things we expect will be deployed mostly in the core.

The edge TICs, though, will be quite different between cities and small counties. In cities, the data center will probably still have more than 100 servers, so that will still be fairly cloud-like. However, when you move to the far edge, to very small counties and even access points, you will probably have a quite small TIC, with fewer than 25 servers for example. As a result, you probably have to design a different infrastructure for this edge. We still hope that we can use OpenStack here, but we should probably think harder about the footprint and the whole network design of the OpenStack deployment. The edge TICs are also mainly designed for the user plane, so you should give more consideration to acceleration and data forwarding performance. I actually gave another presentation at a previous OpenStack event with more observations on what we have done for the edge TICs.

Lesson number eight: more details on the edge TICs, and what we as operators actually need. First, of course, is lightweight. I have seen a lot of talk about this during this week, and in yesterday's presentations related to edge. We think the control plane, including OpenStack, Kubernetes and the SDN controllers, should all be designed to use limited resources at the edge. Containers are something we should consider, especially when you have MEC applications in this edge network; you probably need to provide them with container-as-a-service. But how to provision these containerized resources together with the virtualized resources you probably already have at the edge for the VNFs is something we have to consider carefully. Multi-VIM orchestration and management is probably also necessary when the network is distributed in such a large cloud, so that you can ease the management of images, patches and APIs. Unmanned self-provisioning is something you should care about more at the edge, and this in turn requires high reliability at the edge, which may contradict the limited resources there.

And acceleration. Acceleration is very important for the edge. We now have DPDK, SR-IOV, a lot of smart NICs, FPGAs, GPUs. How do we utilize these resources efficiently, and also virtualize them so that the VNFs sitting on the virtualization layer can see and use the resources in a manageable way? An abstract API should also be defined so that the VNFs sitting on the virtualization layer do not need to worry about what kind of acceleration chips they are using. And we are also thinking that a general resource pool for both the telco functions, such as the 5G UPF, and third-party applications like MEC should probably be provided at this level. This means you have to understand well how to manage these unified resources, et cetera.
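No such abstract acceleration API is standardized yet, so purely as an illustration of the idea (every name here is hypothetical), it could look like a narrow interface that VNFs code against, with per-chip drivers hiding the hardware behind it:

```python
# Purely illustrative sketch of an acceleration abstraction layer: the VNF
# requests a capability, not a chip. All names are hypothetical; no such
# API is standardized today.
from abc import ABC, abstractmethod


class AccelDevice(ABC):
    """What a VNF is allowed to see: a capability, not a vendor or chip."""

    @abstractmethod
    def capability(self) -> str: ...      # e.g. "crypto", "fwd-offload"

    @abstractmethod
    def allocate_queue(self) -> int: ...  # returns an opaque queue handle


class SmartNicDevice(AccelDevice):
    """Driver-side implementation for one family of smart NICs."""

    def capability(self) -> str:
        return "fwd-offload"

    def allocate_queue(self) -> int:
        return 0  # real driver: program the NIC and return a hw queue id


class AccelPool:
    """Resource pool shared by telco functions (e.g. 5G UPF) and MEC apps."""

    def __init__(self, devices: list[AccelDevice]):
        self._devices = devices

    def acquire(self, capability: str) -> AccelDevice:
        for dev in self._devices:
            if dev.capability() == capability:
                return dev
        raise LookupError(f"no device offers {capability!r}")


pool = AccelPool([SmartNicDevice()])
dev = pool.acquire("fwd-offload")  # the VNF never names FPGA/GPU/NIC
print(dev.allocate_queue())
```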
Lesson number nine: how should we do the integration? We actually did a lot of integration for the different TICs, and we saw lots of issues during the whole integration work. However, we do see that these issues decrease as our operations staff gains more experience. Onboarding tests are also very important, since the software integration can change unexpectedly, and it is almost impossible to do all this testing purely manually. So we are working on automated integration and testing tools, so that we can do the integration and the testing in an automated way. Currently a team within China Mobile has automated almost 65% of the virtualized infrastructure test cases in an automatic test suite, so that we can do the infrastructure testing automatically. We also think a fine-grained automated integration and testing procedure should be defined to replace the traditional integration and maintenance procedures in operators. This will bring key value for network virtualization.
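As a flavor of what those automated infrastructure cases look like, here is a minimal sketch: a pytest case that gates onboarding on every compute service being up. The openstacksdk calls are standard; the "novonet" cloud name is again my assumption:

```python
# Sketch: one automated infrastructure test case. It checks that every
# nova-compute service is enabled and up before onboarding proceeds.
# "novonet" is a hypothetical clouds.yaml entry.
import openstack
import pytest


@pytest.fixture(scope="session")
def conn():
    return openstack.connect(cloud="novonet")


def test_all_compute_services_up(conn):
    services = list(conn.compute.services())
    assert services, "no compute services reported"
    down = [
        s.host for s in services
        if s.status != "enabled" or s.state != "up"
    ]
    assert not down, f"compute services down/disabled on: {down}"
```

In our suite, cases like this run unattended after every integration change, which is what makes the 65% automation figure pay off.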
So, the last one: what else should we test? This is what we would like to work on in the following phases of the NovoNet experiment network. The first is 5G. We see that lots of technical gaps still exist at the edge, like lightweight control planes and acceleration; these things should all be tested in an experiment network to figure out the gaps. Another thing we should look into is cloud native. There is lots of discussion about cloud native in the community, but actually few field trials for telco use cases. A suitable framework to host containers and VMs should be worked out, and the related MANO work should also be defined. And again, the cost and power efficiency of software and hardware acceleration should be considered, for the implementation and abstraction layers, which is something this open source community should also work on. So that is all for today's presentation. Any questions?

Thanks for the presentation. When you talk about the vendors, do you mean only Chinese vendors, or not?

We are actually open to Chinese vendors but also to vendors from abroad. But you know, it is a national testing effort; you have to work in different provinces. So we at least need vendors that have some resources in China to support this kind of testing. Currently, as you can see, we have Ericsson, we have Wind River, and Mirantis was in at some point in phase one. So it is not limited to Chinese vendors only.

Thanks. Thank you for this great presentation. I'm just curious about something on this page. You have had a very big experiment over the last one or two years, but here you mention that the key missing technology is lightweight control planes. What do you think is the most heavyweight part, needing a lot of manual touching, in your experience? That might be very good feedback to the community. Thank you.

From our experience, the whole procedure of bringing up the OpenStack cloud, the VIM layer and the hypervisor layer actually takes the most time. Another thing you should worry about is getting the orchestrator, the VNFM and the VIM working together; this takes a lot of time. Although we have specifications, we have ETSI, we have the OpenStack APIs, problems always exist there. So these are the things that actually take time. As for what is heavyweight: currently, the OpenStack control plane and the SDN controllers are probably the two things we need to consider for lightweight design. Because for the orchestrator and the VNFM, you still have solutions to put them in the core and have them control the VNFs located at the edge remotely. But at the edge you have to have your OpenStack and you have to have your SDN controller, for now. For OpenStack, you normally have at least three control nodes, and for the SDN controllers, almost all the vendors tell us they have to allocate two servers. So these are things we need to look into. Thank you.

Okay. Hey, Sebastian from InterDigital. A quick question about the future structure of the edge cloud and the outlook of what you are trying to test when it comes to 5G and beyond technologies. As you probably know, we are working closely in 3GPP on the SBA architecture, the service-based delivery of what are at the moment functions in the core network, which should at some point be delivered as services and instances. So is supporting 5G in that way part of your edge focus? Thanks.

I'm actually not responsible for 5G, so I don't think it is proper for me to answer that question; that is more a 5G strategy matter that I would have to ask my other colleagues to answer. For us, we are targeting the whole TIC, including core and edge, as the platform we provide for the future 5G network. We are working together with the 5G people, hoping that we can fit the requirements of the new 5G cloud. That is actually how we planned this whole network. Yeah.

Hello. Do you actually have any network functions in production, maybe control plane? I mean, all of this is testing, but have you released something into production?

We actually have field trials for virtual IMS and virtual EPC, but not within this project. This project is only for technical experiments; once the VNFs are mature, we put them into field trials and, in the future, to production level. Yeah.

And do you know, for those field trials, whether the virtual IMS or the virtual EPC is running on a solution with OpenStack and SDN integrated?

Yeah. I know the field trials are using OpenStack, of course, and the experiment network and the field trials use the same specifications for the OpenStack northbound APIs, so they are purely OpenStack.

Okay. Thanks. Thank you.