Okay, let's get started. Hi, everyone. Thank you for coming to my presentation. My name is Shen Wang, from Intel's Open Source Technology Center, where I focus on network and storage engineering. There are also two other contributors to this work: one is my colleague at Intel OTC, and the other is a program manager at China Mobile. So this is joint work between Intel and China Mobile on the ONAP project.

Basically I will cover three projects: one is called Akraino, the second is ONAP, and the third is StarlingX. And we will go through a use case, the virtual CPE, showing how to launch a virtual CPE with ONAP and StarlingX.

This is my agenda. First, a brief introduction to the Akraino Edge Stack. Then I will cover ONAP for VNFs and CNFs; a VNF is a virtual network function and a CNF is a container network function, and we use ONAP to orchestrate and manage the lifecycle of VNFs and CNFs. StarlingX provides the infrastructure as a service; as we know, StarlingX is based on OpenStack, with special features added on top to give high performance, low latency, and high availability. Another very important concept in ONAP is the closed control loop, which is for network automation. I will also introduce hardware platform awareness, because we leverage hardware technologies, especially acceleration technologies, to improve the performance of the network functions. The last topic is the virtual CPE, the example we tested on top of ONAP and StarlingX.

So, Akraino. What is Akraino? Everyone is talking about edge computing, and at the beginning of this year AT&T announced a broader project inside the Linux Foundation called Akraino. This slide shows the stack, what Akraino can do. At the bottom is a fully integrated OpenStack, which manages the infrastructure and also the infrastructure lifecycle for the edge applications. In the middle is the API layer, a middleware layer that exposes APIs to the applications above, meaning the edge applications running in user space. It gives applications access to the underlying infrastructure, and it also provides middleware and APIs for cross-platform interoperability, because not everyone runs on AT&T or the other telco carriers. The top layer is open: it is up to users to develop edge applications on top of it, so we don't define what they are.

This is the detailed Akraino edge stack proposed by AT&T at the beginning of this year. We will see some evolution, but these are the basic building blocks for the edge stack. We are focusing on the two yellow boxes. At the bottom, the blue boxes are the hardware. On top of the hardware is the infrastructure: storage, networking, orchestration, OpenStack, and the operating system, such as Linux. These boxes are reference options; any alternative can replace them, and we are going to use StarlingX to replace the OpenStack part. On top of that is ONAP, for NFV orchestration and management, the lifecycle management.
ONAP is recommended by AT&T for NFV orchestration and management. For the upper layers there is a lot of uncertainty, because most of the components are not well defined yet, so it is an opportunity to contribute projects and ideas into the different boxes. CI/CD and Airship are also in the picture for overall deployment and infrastructure lifecycle management, as I heard from them. So that is the brief introduction to Akraino: it is an umbrella project, a framework, and other projects need to be added into that framework, into the umbrella project.

So what about ONAP? ONAP was created in March last year by merging two projects. One is ECOMP: AT&T open-sourced some internal components of their product as OpenECOMP. The other is an open source community project called OPEN-O. The two were merged into ONAP to provide the capability for NFV orchestration and lifecycle management. ONAP is model-driven, for network automation; this is the big feature they announced. They are using TOSCA, and so far they can also use Heat, to orchestrate the VNFs or CNFs.

So ONAP is a platform to orchestrate VNFs in, say, an SDN environment. It is composed of a bunch of components, each with different responsibilities, and I would say most of the components are tightly coupled: you cannot run ONAP without some of them. So it is a composition of several applications running together, and it is divided into two stages: a design-time stage and a runtime stage.

This is the detailed architecture of ONAP. At design time, the telco admin can use the GUI to design the service. A service contains many virtual machines, each corresponding to a virtual network function; services are defined at design time. On the right is the runtime. The major flow is: design time is where the administrator at the telco carrier designs the VNFs, and at runtime SDC loads the template, the so-called model-driven template, to launch the VNFs or CNFs.

There are two options for launching a VNF or CNF, because the project came from two parties, AT&T and China Mobile. One path goes through SO, then APPC, the application controller, and then Multi-Cloud; Multi-Cloud calls OpenStack or another cloud to launch the virtual machines. The other option is VFC, the virtual function controller: SDC and Policy can call VFC, and VFC calls Multi-Cloud, which calls OpenStack to launch the virtual machines for the network function.

So that is the brief workflow inside ONAP. At design time we can define many virtual functions and upload the virtual function models. Design time also works like Google Play or the Apple App Store: there is a component that provides a place for VNF vendors to publish their VNFs. And each virtual function model is described by a TOSCA template, which is like a YAML file.
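To make the model-driven idea concrete, here is a minimal sketch of reading such a descriptor with Python and PyYAML. The template is only illustrative: the node name vbng_vdu and the exact fields are assumptions modeled loosely on the ETSI NFV TOSCA profile, not a template taken from ONAP.

```python
# A minimal sketch of loading a TOSCA-style VNF descriptor (requires PyYAML).
# The template below is an illustrative assumption, not the exact ONAP schema.
import yaml

VNF_DESCRIPTOR = """
tosca_definitions_version: tosca_simple_yaml_1_0
topology_template:
  node_templates:
    vbng_vdu:
      type: tosca.nodes.nfv.Vdu.Compute
      capabilities:
        virtual_compute:
          properties:
            virtual_cpu:
              num_virtual_cpu: 4
            virtual_memory:
              virtual_mem_size: 4096  # MiB
"""

def vdu_requirements(descriptor_text):
    """Extract per-VDU compute requirements from a TOSCA-style template."""
    doc = yaml.safe_load(descriptor_text)
    nodes = doc["topology_template"]["node_templates"]
    reqs = {}
    for name, node in nodes.items():
        props = node["capabilities"]["virtual_compute"]["properties"]
        reqs[name] = {
            "vcpus": props["virtual_cpu"]["num_virtual_cpu"],
            "ram_mb": props["virtual_memory"]["virtual_mem_size"],
        }
    return reqs

print(vdu_requirements(VNF_DESCRIPTOR))
# {'vbng_vdu': {'vcpus': 4, 'ram_mb': 4096}}
```

In the real platform, parsing and validating these models is SDC's and Policy's job; the sketch just shows the kind of structured data a model-driven orchestrator works from.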
The design time also has components to validate virtual functions, assign licenses, and certify them, so the telco carriers can use the VNFs with confidence; it is like a marketplace. Design time can also create categories and distribute the models for instantiation. So that is design time; it is a fairly small part of the overall architecture.

On the runtime side, this diagram shows the two options again. One path is the local VFC one, boxes 3.9.1 and 3.2.2 in the diagram, the so-called VFC and Multi-Cloud. And there are options for which cloud a telco carrier can use: OpenStack is one of them, and we can also use the Wind River distribution, VMware, or Microsoft Azure, anything for the cloud. The other path is SO calling Multi-Cloud, going the other way. The major purpose of the ONAP runtime is to instantiate the virtual functions, to launch those virtual machines, and to do the lifecycle management of the virtual machines: start, stop, reset, scale up, scale down, et cetera, and also to monitor the environment.

So, StarlingX. StarlingX is a top-level OpenStack Foundation pilot project; I believe many people have already mentioned it. The first StarlingX release came out last month. It is a new project, and we encourage more people to contribute. StarlingX provides high performance, low latency, and high availability for the edge cloud. This is the architecture of StarlingX. Basically StarlingX is based on OpenStack; most of the components come from OpenStack, and the community has added value on top, features like configuration management, fault management, host management, service management, and software management. StarlingX also builds on other open source projects such as Ceph. I care most about the networking, so we are using OVS-DPDK, Open vSwitch with DPDK, and also SR-IOV to get better performance out of StarlingX. I won't talk more about StarlingX, because this is not a StarlingX session.

Next I will cover network automation. There is a concept in ONAP called the control loop. A control loop means that everything about the virtual machine can be handled inside ONAP; you don't need an additional component or project to handle it. That includes design: you can design the virtual machine (and when I say virtual machine, I am referring to a virtual network function), create it, collect information from it, analyze that data to get the status of the virtual machine, detect any failure or warning, and publish and respond when the condition changes.

These are the components in ONAP responsible for the control loop. The biggest difference between a closed control loop and an open control loop is that in an open control loop you need the admin, a human, to interact and take some action on the incident, while in a closed control loop everything is done automatically, by the system. In a closed control loop, ONAP can execute, they call it enforce, one of many actions to remediate the network condition, because a monitoring component is watching the condition, and it can then detect that the network condition has been corrected through those actions. That is the big difference.
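As a toy illustration of that difference, here is a minimal closed loop in Python: monitor, analyze, decide by policy, and act, with no human in the loop. The function names are hypothetical stand-ins for DCAE, Policy, and the controllers (VFC/SO/APPC), not real ONAP APIs.

```python
# A toy closed control loop: monitor -> analyze -> policy decision -> action.
# collect_metrics, decide, and restart_vnf are hypothetical stand-ins for the
# real ONAP components (DCAE, Policy, and VFC/SO/APPC respectively).
import random
import time

def collect_metrics(vnf_id):
    """Stand-in for DCAE event collection; fakes an occasional failure."""
    return {"vnf": vnf_id, "heartbeat_ok": random.random() > 0.2}

def decide(event):
    """Stand-in for a predefined Policy rule: on missed heartbeat, restart."""
    return "restart" if not event["heartbeat_ok"] else None

def restart_vnf(vnf_id):
    """Stand-in for the controller enforcing the remediation action."""
    print(f"restarting {vnf_id}")

def control_loop(vnf_id, iterations=5):
    for _ in range(iterations):
        event = collect_metrics(vnf_id)   # monitor
        action = decide(event)            # analyze + policy decision
        if action == "restart":
            restart_vnf(vnf_id)           # act, with no human in the loop
        time.sleep(0.1)

control_loop("vbng-01")
```

The point of the sketch is only the shape of the loop: the system itself detects the bad condition and enforces the remediation, which is what distinguishes the closed loop from the open one.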
At runtime, the control loop works by choosing and creating microservices, because every component, the monitoring component and the policy component, runs as a microservice, as a container. You can also specify a model file to be distributed at runtime for the control loop, and the components will recognize what the model describes.

More about the control loop: at runtime you can configure the control loop, driven by the VNF or CNF. You can trigger the deployment of a control loop, including triggering the APIs exposed by DCAE; DCAE is the monitoring component in ONAP. And you can do lifecycle management of the control loop: you can monitor the status of the virtual function and trigger actions on an unusual status.

This is the workflow of the control loop. DCAE is the monitor, and Policy is the major part that triggers actions and launches the virtual machines. Policy calls VFC, SO, or other components to launch the VNF, and uses DCAE to monitor the VNF. When something goes wrong, Policy, which holds predefined policies, triggers an action on the VNF through VFC or through SO: perhaps a restart, perhaps a migration of the virtual machine, the VNF. In any case, it takes the actions predefined in the policy to reach a new network condition for the VNFs. So that is the workflow diagram explaining what the control loop is inside the ONAP runtime.

The other thing we are doing is what we call HPA. There is a critical path for HPA. One option goes through SO, but because we are working with China Mobile, we chose the other way, VFC; VFC came from China Mobile, SO came from AT&T. So we chose VFC to run our experiments, and Policy, OOF, VFC, and Multi-Cloud are the four components on the critical path of HPA.

So what is HPA? HPA stands for hardware platform awareness. There is a similar concept in OpenStack. It is a way to describe the hardware capabilities so that ONAP or OpenStack can understand what the hardware can provide. There are two focus areas: one is detecting capabilities, the other is configuring capabilities. HPA requirements are part of the ETSI NFV VNF descriptor, because NFV cares a lot about performance: the VNF is not running on bare-metal machines, it is running in virtual machines.

This slide shows a sample; the right part is a sample of how to describe the requirements of the virtual machines. The red box shows the HPA requirements, including the memory size and the number of CPUs. Beyond those, we can have more complicated requirement specifications: I can require SR-IOV, or I can require DPDK, to run my virtual network functions. I specify the requirements inside the TOSCA model, using TOSCA as the language, and Policy loads the TOSCA model and translates it into a form the system can understand. On the right you can see the policy distribution box, the blue box, with data coming from SDC. SDC, Service Design and Creation, is the GUI for the admin, the human, to design the services, the virtual network functions.
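Before walking through the five homing steps on the next slide, here is a minimal sketch in Python of what the requirement matching boils down to. Everything in it is an illustrative assumption: the requirement keys, the feature names, the "sriov-nic" alias, and the candidate flavors are stand-ins, not the exact ONAP policy format or a real A&AI inventory.

```python
# A sketch of the HPA homing step: Policy has translated the TOSCA model into
# requirements, and the optimizer looks for a flavor that satisfies them.
requirements = {
    "vcpus": 4,
    "ram_mb": 8192,
    "features": {"sriov"},  # hardware acceleration asks from the descriptor
}

# In OpenStack, SR-IOV PCI passthrough is typically requested via a flavor
# extra spec such as "pci_passthrough:alias"="sriov-nic:1" (alias name varies
# per deployment; "sriov-nic" here is an assumption).
flavors = [
    {"name": "m1.large", "vcpus": 4, "ram_mb": 8192, "extra_specs": {}},
    {"name": "vnf.sriov", "vcpus": 8, "ram_mb": 16384,
     "extra_specs": {"pci_passthrough:alias": "sriov-nic:1"}},
]

def features_of(flavor):
    """Derive capability tags from a flavor's extra specs."""
    feats = set()
    if "pci_passthrough:alias" in flavor["extra_specs"]:
        feats.add("sriov")
    return feats

def home(candidates, req):
    """Return the first flavor satisfying the HPA requirements, else None."""
    for f in candidates:
        if (f["vcpus"] >= req["vcpus"]
                and f["ram_mb"] >= req["ram_mb"]
                and req["features"] <= features_of(f)):
            return f
    return None

print(home(flavors, requirements)["name"])  # vnf.sriov
```

The real flow spreads this work across Policy, OOF, A&AI, and Multi-Cloud, but the matching idea is the same.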
So the policy distribution loads the TOSCA model and feeds the policy decision and enforcement framework, and then they call OOF. OOF, the Optimization Framework, is another component in ONAP; OOF decides where to place the virtual machine.

There are five steps. First, we send the homing requirements out to OOF; homing means placement. OOF pulls the requirements from Policy, because Policy has read the TOSCA requirements, right? Then OOF checks the A&AI database. Suppose OpenStack is the cloud used by ONAP: in that database, ONAP records which flavors exist in OpenStack, which flavors ONAP can use. So OOF checks the database for where SR-IOV is available, and which host can satisfy the required memory size and the required number of CPUs. In step four, OOF also checks with Multi-Cloud, which talks to OpenStack: is it feasible to launch the VNF on the target? In the last step, OOF returns the homing, the placement decision, to VFC; VFC calls Multi-Cloud, and Multi-Cloud calls OpenStack to launch the VNFs.

So if a person specifies a requirement like "I need SR-IOV", Policy translates the requirement, and OOF understands: oh, you need SR-IOV, so check the database for where SR-IOV is. OOF finds the target and calls VFC, which calls Multi-Cloud, which calls OpenStack to launch the VNF. As we know, SR-IOV is specified in the flavor as an extra spec, right? So OOF can pick that flavor to launch the VNF on a host where SR-IOV is offered.

We also made some modifications in Multi-Cloud. Multi-Cloud is a layer that can call different versions of OpenStack; it can also call the Wind River distribution, it can call VMware, and, as I mentioned, it can call AWS or Microsoft Azure. So Multi-Cloud is a layer on top of the cloud solutions, the IaaS solutions. We made modifications to have Multi-Cloud support StarlingX: we added a plugin inside the Multi-Cloud framework.

Before the use case, we had the question: can ONAP work with StarlingX on Akraino to launch virtual functions for the edge? So we tried it out with the virtual CPE.

Before I go to the virtual CPE, there is a concept to describe first: the residential gateway, the RG. The RG has two parts. One is the BRG, which runs at the home, in the community; the other is the virtual gateway, which runs in the core network, in the data center. In this diagram, the BRG serves as an L2 switch: the devices at home connect to the BRG, the BRG links through the logical subscriber link to the vG, and the vG connects them to the Internet, to the WAN.

This is the virtual CPE diagram. The virtual CPE is not a single virtual machine; it contains a bunch of virtual machines that work together to offer the network connectivity service to the user. As I mentioned, the RG is distributed between the on-site devices and the edge-cloud-based components. Here is a statement to introduce virtual CPE: virtual CPE is a way to deliver network services, such as routing, firewall security, and virtual private network connectivity, to enterprises by using software rather than dedicated hardware.
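Putting the RG pieces just described in one place, here is a tiny sketch of the subscriber's path as data; the hop names come from the diagrams, while the representation itself is just an illustration.

```python
# The residential gateway split as a simple data structure: traffic from a
# home device crosses the BRG on-site, then the logical subscriber link (LSL),
# then the vG in the data center, and reaches the WAN. Purely illustrative.
VCPE_PATH = [
    "BRG",       # bridged residential gateway: L2 switch at the home
    "LSL",       # logical subscriber link into the core network
    "vG",        # virtual gateway running in the data center
    "Internet",  # the WAN the subscriber ultimately reaches
]

def trace(device="laptop"):
    """Show a home device's traffic crossing the vCPE service chain."""
    return " -> ".join([device] + VCPE_PATH)

print(trace())  # laptop -> BRG -> LSL -> vG -> Internet
```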
This diagram shows that the left part, access, corresponds to the home network; the aggregation part corresponds to the LSL, the logical subscriber link; and the right part corresponds to the vG, the virtual gateway. The virtual CPE contains vBNG, vG, vDHCP, and so on, depending on the usage scenario: a bunch of virtual machines working together to provide the service. So with virtual CPE we don't need a dedicated hardware device to connect to the Internet; we can use simple, cheaper hardware running virtual machines to provide the service for the home to connect to the Internet.

The next diagram shows the ONAP components working to launch those virtual machines, and it shows more of the virtual machines: the vBNG aggregates on the home-network side, and the vG-MUX aggregates virtual functions on the edge or data-center side of the network. So here is one family, and here is another family; each has its own BRG connecting to the virtual BNG, the virtual BNG connects to the vG-MUX, and the vG-MUX connects to vG1 and vG2, which in turn connect to the web server and on to the Internet. In this vCPE scenario, ONAP launches all of these virtual machines to serve the users.

So this is our environment. That was the introduction to the vCPE; we use ONAP and StarlingX to launch those virtual machines, and we ran the experiments to prove that it can work. This is the lab environment we used for the ONAP testing. StarlingX is one cloud, and we have another cloud running the Wind River OpenStack distribution, so we can launch different applications: StarlingX is used to launch the virtual CPE, and the Wind River distribution is used to launch the so-called vVoLTE use case.

This is the test plan for the program; the ONAP part is still in engineering and testing. ONAP R3 is about to be released in the middle of November, so right now it is not released yet, but in this release we have already added HPA support, specifically SR-IOV support, inside ONAP. For the PoC we added vCPE with ONAP: we use ONAP for the lifecycle management of the vCPE, launched by ONAP and also by StarlingX, because ONAP calls StarlingX through the Multi-Cloud module, with SR-IOV enabled. The integration testing is ongoing at China Mobile, because we are right before the ONAP release, but locally we have already launched the user scenario.

Another PoC I am going to introduce is FlexRAN. If you are interested in that scenario, you can go to the Intel demo booth; they already have a PoC that uses ONAP to call StarlingX to launch FlexRAN for the edge usage. You can dig into it more at the Intel booth. The same team developed the different demonstrations, but the test plans are different: China Mobile doesn't care about the FlexRAN scenario, it cares about the other plans, the vCPE plan and the VoLTE plan.

StarlingX R1 was released last month, and everything is open source. If you are interested, you can download the source and try ONAP, StarlingX, and the vCPE yourself, after the ONAP release in mid-November; before that you would need to do some extra work, because we are still in integration testing. But for the PoC, you can take a look.

I think that's pretty much it. Thank you for coming and thank you for listening to my presentation.
Do you have any questions? Thank you so much.