Okay, thank you everyone. My name is Teran Zhou from Huawei, and this is a joint presentation from me and Albert. I will give the first part and then invite Albert to give the second half. Okay, let's get started.

Android is popular; I believe many of you are using an Android phone. Android is customized from Linux for mobile devices, and Linux, as a general operating system, is also customized for PCs, servers, and so on, so that it can meet requirements from various users. In this sense, OpenStack is similar to Linux as a general cloud OS. It is a candidate for the virtual infrastructure manager (VIM) in the NFV architecture, and the northbound interface (NBI) is essential to enable application and business model innovation. This talk discusses considerations and prototypes on customizing OpenStack for telco NFV from the northbound point of view.

OPNFV is an open source project that provides a reference platform for NFV. It follows the ETSI NFV architecture and works in close collaboration with a number of upstream open source projects. This diagram illustrates the architecture and scope of OPNFV. OpenStack is a competitive candidate for the VIM. The VIM NBI includes two reference points: the interface to the VNF manager and the interface to the orchestrator. In OPNFV there is a project named Model Oriented Virtualization Interface, dedicated to the VIM NBI. Existing OpenStack APIs are IaaS service oriented; we are going to provide a more abstract, model- and service-oriented NBI alternative by extending the general cloud platform. It simplifies the orchestrator and the VNF manager, and also makes resource access, connection generation, flow identification, policy operation, and so on easier. We will provide a general VIM NBI layer so that various NBI models can be quickly implemented and plugged into the platform. Meanwhile, we will cooperate with upstream projects so that those features can become part of open source projects like OpenStack.

Just as an operating system has both a low-level SDK and high-level application APIs, the VIM should also offer NBIs at different levels for different application requirements. There are intent-based NBIs, which are technology neutral and business oriented, and there are functional NBIs, which are function oriented and technology related. The intent-based NBI is promising and very interesting. An intent provides a high-level description of requirements on the network, with abstraction from the top down. Usually it is technology neutral; for example, we do not need to see VPN, MPLS, or any other protocols. It expresses what to do rather than how to do it. Clearly an intent NBI is simple and easy to use, and it can be platform and solution independent.

As I mentioned previously, both the functional NBI and the intent-based NBI are useful; they just serve different users. For the intent-based NBI, the target users are business designers from carriers and IT service developers. Business designers may want to quickly build new network services, and IT service developers can integrate network resources and capabilities into their applications. This slide shows one mode in which the intent NBI can be used. First, the business designer or IT developer uses a set of consistent intent primitives to compose service templates.
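As a rough illustration of that idea, and purely as a hypothetical sketch rather than an existing OpenStack or OPNFV API, intent primitives composed into a service template might look like the following, using the node, connection, and flow objects discussed later in this talk (all class and field names are illustrative assumptions):

```python
# Hypothetical sketch of intent primitives composed into a service template.
# None of these classes exist in OpenStack or OPNFV; names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """An abstract endpoint: a service pool, an L2 network, a VPC, etc."""
    name: str
    kind: str                      # e.g. "firewall-pool", "l2-network", "vpc"


@dataclass
class Connection:
    """Connectivity among end nodes: p2p, p2mp, mesh, or a composition."""
    endpoints: List[Node]
    topology: str = "point-to-point"


@dataclass
class Flow:
    """Traffic carried on a connection, e.g. video, web, or VIP-tagged."""
    connection: Connection
    match: str                     # e.g. "http", "video", "vip"


@dataclass
class Intent:
    """Top-level intent: an operation applied to an object, with constraints."""
    operation: str                 # e.g. "create", "insert", "block", "adjust-bandwidth"
    target: object                 # a Node, Connection, or Flow
    constraints: dict = field(default_factory=dict)


# A "bandwidth on demand" service template composed of intent primitives.
vpc = Node("tenant-vpc", "vpc")
internet = Node("internet", "external")
link = Connection([vpc, internet])
bandwidth_on_demand = [
    Intent("create", link),
    Intent("adjust-bandwidth", link, constraints={"bandwidth": "10G"}),
    Intent("block", Flow(link, match="http")),
]
```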
The templates are based on different scenarios such as bandwidth on demand, high availability, cloud, big data, and so on. End users from the education or finance domains, or web service providers, big data service providers, and game providers, can then select the corresponding service template.

To design the intent models, we can learn from intent expressions in the real world. For example, one may have the intent "I want to watch Harry Potter right now in the living room." In this expression, the intent is composed of an operation and an object. In the network and NFV domain we have similar intent expressions, say: I want to create a DMZ, I want to insert a firewall service, I want to block the HTTP flow, I want to adjust the bandwidth to 10 gig. From these we get this general intent model. At the top level, an intent is expressed as an operation on an object. The operation carries the semantics from conditions to actions, together with constraints. In the network and NFV area, the object of an intent is usually a node, a connection, or a flow. A node, for example, could be a firewall service pool or a layer-2 network. A connection describes the connectivity among end nodes; it could be point-to-point, point-to-multipoint, mesh, or a composition of these basic topologies. A flow is the traffic on a connection; it can be video traffic, web traffic, or traffic tagged as VIP.

This is a service-chaining example that can be modeled with the previous intent expression. We want to apply several network services between the VPC and the Internet. We first create three service nodes, then set up the connection between the two end nodes, the VPC and the Internet. The operation applied to the connection is to go through the three service nodes. Then we identify the flows to be placed on this connection and do the operation to steer the flows. It is totally topology agnostic. Now I would like to invite Albert to give the following presentation on some telco scenarios.

Hello everyone, thanks for coming. First I would like to introduce myself a bit. I am Dongfeng from Huawei Nanjing Institute, and I have been doing networking for around ten years. Here are some use cases we get from our various customers; I will give four of them. One is BGP MPLS VPN, the second is service chaining, the third is global topology, and the fourth is E2E QoS guarantee for virtual networks.

For MPLS VPN, we have an abstract, intent-based model. You can see from this diagram that we can use OpenStack to control this piece, that is, the physical infrastructure. Typically, inside the data center the user would have OpenStack controlling the L2 and L3 networks. Inside the data center there would be a separate CE, which stands for customer edge. This CE connects to the PE, typically owned by a service provider. Abstractly, the virtual abstraction of a VPN service can be divided into just two parts: an attachment circuit and a tunnel.

For the OpenStack operation workflow, we use Neutron API extensions. First, we create an attachment circuit in each tenant network; the attachment circuit is treated as a logical interface, and in L2 networks it could be represented as a set of VLANs or a bridge domain. Second, we create MPLS LSP tunnels; this step is optional when dynamic tunnels such as LDP or RSVP are used. Third, we create the MPLS VPN service using the attachment circuits and LSP tunnels. Connectivity between attachment circuits in different data centers uses LSP tunnels, a full mesh of LSPs by default, with MPLS resiliency enabled.
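As a minimal sketch of that three-step workflow, assuming a hypothetical Neutron extension exposing mplsvpn resources (the URL paths, resource names, and attributes below are illustrative assumptions, not an existing Neutron API), the calls could look roughly like this:

```python
# Hypothetical illustration of the BGP/MPLS VPN workflow described above.
# The /v2.0/mplsvpn/* resources are assumed for illustration only; they are
# not part of today's Neutron API.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "<token>"}


def create(resource, body):
    """POST a resource to the (hypothetical) Neutron extension and return it."""
    resp = requests.post(f"{NEUTRON}/{resource}", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()


# Step 1: create an attachment circuit in each tenant network; in an L2
# network it would map to a set of VLANs or a bridge domain.
ac_dc1 = create("mplsvpn/attachment_circuits",
                {"attachment_circuit": {"network_id": "<net-in-dc1>"}})
ac_dc2 = create("mplsvpn/attachment_circuits",
                {"attachment_circuit": {"network_id": "<net-in-dc2>"}})

# Step 2: create MPLS LSP tunnels between the data centers. Optional when
# dynamic tunnels (LDP or RSVP) are used.
tunnel = create("mplsvpn/tunnels",
                {"tunnel": {"src_pe": "<pe-dc1>", "dst_pe": "<pe-dc2>"}})

# Step 3: create the MPLS VPN service from attachment circuits and tunnels;
# connectivity defaults to a full mesh of LSPs with MPLS resiliency enabled.
vpn = create("mplsvpn/vpn_services",
             {"vpn_service": {
                 "attachment_circuits": [ac_dc1["attachment_circuit"]["id"],
                                         ac_dc2["attachment_circuit"]["id"]],
                 "tunnels": [tunnel["tunnel"]["id"]]}})
```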
We can also use MAC withdrawal for faster convergence: during VM movement or migration and server consolidation, we send MAC withdrawal messages. Here is the link to the latest blueprint.

OK, next I would like to talk about service chaining. We have a project in OPNFV, which is also known as VNF forwarding graph. Typically, in a service chain solution there would be a control plane for the service administrator, with an OpenStack service chain orchestrator and an SDN controller. What is a service chain? I think most of the audience already knows: a service chain is an ordered list of value-added services applied in a user-specified way. Since we are now moving to the NFV environment, the service chain architecture needs to be adapted to the virtualized environment. Here we have an E2E orchestrator, which contains the service chain orchestrator and the NFV orchestrator. In NFV we have the VNF manager, which controls the life cycle of the value-added services. In the cloud OS, which is typically OpenStack, OpenStack acts as the virtualized infrastructure manager. There is a service chain controller, which may typically be some component inside Neutron, and a virtual service switch, which is typically OVS; all the value-added services are connected to the virtual switch. The user can define policies in the policy center, which specify the chain action policy for the user. These policies are downloaded into the flow classifier, where we can usually use OpenFlow to classify the flows. OpenFlow is just one implementation; another implementation can use the NSH header, which has been adopted in the IETF.

OK, here I want to explain a little bit about the flow of automatic service provisioning in the NFV environment. First, the user uses the E2E orchestrator to logically define the service chain. The second step is to request a new VNF to run that value-added service. The third step is to request the VIM, typically OpenStack, to allocate a new VM for that instance. The fourth step is to create the new VM, load the image, and spawn the instance. The fifth step is to synchronize the VNF instance profile and the logical service chain definition, for example characteristics, status, capacity, address, and the attached virtual service switch, to the SC controller, the service chain controller. In the sixth step, according to the logical service chain definition and the newly added instance, the service chain controller generates service chain flow tables for the new instance and sends them to the virtual service switch; the service chain flow rules here can be OpenFlow, PBR (policy-based routing), or something like that. Finally, the virtual service switch steers traffic to the new instance according to the policy. This is all intent based.

What we are doing in OpenStack is motivated by the following gaps. As you may know, OpenStack does not have interfaces for service function chaining: there is no open interface to integrate OpenStack with different vendors' service function instances, both virtual and physical. Service function instances created on many existing service devices need to be included to build a service chain ecosystem under the OpenStack umbrella, and there is no well-defined open interface to allow registration of third-party service instance locator, flavor, and capacity information.
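Just to make that last point concrete, here is a rough, purely hypothetical sketch of what such a registration call could look like; the endpoint, resource name, and fields below do not exist in OpenStack and are only meant to illustrate the gap:

```python
# Purely hypothetical sketch of registering a third-party service function
# instance (locator, flavor, capacity) with a service chain controller.
# No such Neutron/SFC resource exists today; this only illustrates the idea.
import requests

SFC_API = "http://controller:9696/v2.0/sfc"
HEADERS = {"X-Auth-Token": "<token>"}

registration = {
    "service_instance": {
        "name": "vendor-x-vfw-01",
        "type": "firewall",
        # Locator: where traffic can reach the instance (virtual or physical).
        "locator": {"switch_port": "<ovs-port-uuid>", "vlan": 100},
        # Flavor and capacity information used for placement and steering.
        "flavor": "medium",
        "capacity": {"throughput_gbps": 10, "max_sessions": 200000},
    }
}

resp = requests.post(f"{SFC_API}/service_instances",
                     json=registration, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```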
A lot of vendors are developing and providing instances, and currently OpenStack has no normalized interface for different vendors' service chain drivers. Service chaining brings lower CAPEX and OPEX by automatically provisioning and steering different tenants' flows through different sequences of service functions, but vendors have to depend on provider driver interfaces to OpenStack to get service chain functionality.

Okay, and here the interface between the orchestrator and the client would be policy based and would specify the user's intention for the service function requirements. The SFC orchestrator translates the client's abstract, policy-based service function requirements for the traffic flow into a concrete SFC representation consisting of service function instance locator information. This API expresses an abstract service function chain as a list of service instances and expresses the traffic flows through flow descriptor data.

Here is global topology. In ONOS, which is the Open Network Operating System, the controller has a global-view feature, so SDN controllers like ONOS have a global view of the network elements. With OpenStack being the VIM, there should also be a global topology with virtualized node and link information. So I think OpenStack should draw some input from the ONF NBI and ONOS to form a good model and API for topology. Currently, if you use OpenStack Neutron you can see the topology diagram in the Horizon panel, but there are no APIs for users to get the topology themselves. I think OpenStack should derive the model from OPNFV to provide such an API.

The last thing we think would be very useful for NFV use cases is E2E QoS for VNFs. As we all know, QoS today is defined with levels like gold, silver, and bronze. But in a virtualized environment the VNFs are just VMs running software. As this diagram shows, I have a VNF here on Host 1 and another VNF on Host 2, and I want a connection between these two VMs that emulates a physical link, maybe 10 gigabit or 100 gigabit. How do we ensure this? The actual traffic goes through the virtual fabric, typically carried over VXLAN or VLAN, so how to ensure E2E QoS is a big problem. These models and APIs are again needed in Neutron, and the backend SDN solution should also consider them to meet the NFV demands; typically the parameters would be bandwidth, latency, and some OAM parameters.

To summarize this presentation: we introduced the motivation for customizing OpenStack for telco NFV from the northbound point of view, and intent-based interfaces and functional interfaces for typical use cases. More is to come, with community support, to customize OpenStack for better telco NFV. That's all for our presentation, thank you all for coming. Any questions?

Okay, about the service chain controller — go to this page, okay. The service chain controller can be inside Neutron; maybe you use Neutron itself as the service chain controller. And since we have a pluggable architecture, the user can integrate a third-party SDN controller as the service chain controller, maybe OpenDaylight or any other vendor-specific SDN controller. It's all supported,
since the northbound API is the same and the service chain driver interface is abstracted.

Please use the microphone at the back for questions, please. No problem. Hi. Just one question on the information model you were thinking about for expressing the northbound interface and describing the parameters: do you have a modeling language in mind already, or have you implemented these? I think you can clarify it; maybe you can relate it to these intent-based models. Yeah, we are thinking about a language to describe this kind of model. We have related work named the NEMO language; we have submitted the design as a draft to the IETF, and we have also initiated a project in OpenDaylight to implement this kind of intent expression language.

A question going back to the service chain controller: you said that the service chain controller or orchestrator is part of OpenStack, but for service chaining between VNFs that may be running in different OpenStack instances, doesn't it have to be outside OpenStack? No, no. Since we have the locator information, which abstracts the OpenStack-generated value-added services such as the load balancer service and the firewall service, as well as third-party service functions controlled by the VNF manager, we use that locator information to specify where a certain value-added service is located. If you have an OpenStack-generated firewall as a service, say, its backend may be iptables, and that iptables host connects to a certain port of the switch; this we can determine, and we can generate the topology accordingly. And if you use a physical value-added service, or the value-added service runs on a VM that is connected to the switch, all these scenarios are supported.

No further questions? Okay. Thank you all for coming. Thank you very much.