My name is Akagi Komei, and I am a network engineer on the NTT Communications research and development team. Currently, I am in charge of designing and planning NFV services. I also evaluate testbed networking composed of virtual appliances and OpenStack.

Today, I will talk about these topics. First, I will introduce our proof of concept and NFV use cases. These points will help you understand the vision of telco networking services. I will also introduce the advantages of NFV from two viewpoints: the user's view and the telco operator's view. Next, I will share how we deployed OpenStack in our PoC and what we gained from it. Some questions and requirements will be introduced: why do we need OpenStack, and what issues or difficulties with OpenStack do we currently face? Finally, I will introduce the next steps of our evaluation to realize our NFV services with OpenStack.

The basic question is: why do we need to deploy NFV on our network platforms? The reasons involve two viewpoints. First, we believe we can reduce expenditure on networking capital and operations. Second, NFV will enable every user to customize networking services flexibly by themselves. Our concept is the ideal model to realize these two viewpoints. We built this PoC with OpenStack to verify the requirements from both the operator's and the user's perspective.

We will introduce our proof of concept. This video shows a simple project for our proof of concept. We are able to deploy virtual CPE at any site: cloud, carrier network, and user sites. A user site is, for example, a branch office or a shop, like a convenience store, anywhere in the world. The virtual CPE is equipment placed at the edge of these sites. In general NFV services, virtual network functions are aggregated at the cloud site. However, our concept is delivering VNFs anywhere, onto any virtual CPE, because we wish to realize as many functions as possible.

These are some of the example VNFs we deployed. Users can select networking services flexibly. What I mean by flexible is, for example, that they can request a WAN acceleration service to be deployed to improve their experience for only the next hour. After that hour, when users no longer need it, they can stop the service and the VNF will be undeployed. Speaking of WAN acceleration or DDoS defense services, in general such VNFs should be deployed to heterogeneous sites and be able to establish sessions between them. Of course, we should be able to perform the whole VNF deployment and federation procedure automatically.

Our activities to create our PoC are as follows. First, we built our PoC to share our concept. In the first step, we deployed OpenStack as the Virtualised Infrastructure Manager with basic components and procedures. We also introduced some NFV use cases of a telco carrier. These cases describe how much service flexibility will be needed on our platform. Second, we are evaluating many virtual network appliances on our testbed. These evaluation results will enable us to realize NFV networking services in a carrier network. In our PoC, the operation workflow is required to support massive numbers of sites and various devices, so we finally had to evaluate and install an orchestrator. We considered two ways to support orchestration. One is installation of commercial products which cooperate with OpenStack and other NFV platforms. The other is to deploy fully OpenStack-made components. We continue to verify these options to support our workloads. We introduced some approaches to OpenStack deployment for an NFV platform.
This is the well-known framework of NFV. The ETSI NFV working group released this reference model to standardize the NFV framework. Of course, we also follow this model in our PoC. We deploy various VNFs to our platform. These functions will be provided as virtual appliances from vendors, because in our case we need advanced networking functions which are not supported in Neutron. So these VNFs will be deployed as virtual machines. When we deploy OpenStack components, we focus on the management layers: the VIM, the VNFM, and orchestration. Let's apply each OpenStack component to its respective management block. Today, we focus on management of compute and networking resources, so we basically use Nova and Neutron as the VIM.

Nova handles compute resources for virtual appliances with its own scheduler and compute drivers. Virtual CPU and RAM resources are served to the virtual appliances. The amount of resources to assign varies completely for each appliance. So, to find suitable resources, we had to run commands like nova flavor-create and nova boot over and over (a small sketch of this appears below). Neutron supports handling networking resources at layer 2, layer 3, or higher. We utilize Neutron as the underlay networking controller with the Neutron OVS agent. Unfortunately, Neutron's layer-3 support is slightly incompatible with our NFV use case. Also, we have disabled port security groups, for reasons I will explain later.

Recently, OpenStack has been expanding its use cases. Today, we are interested in support for the higher management layers in OpenStack. Heat gives us descriptors in YAML format to assign virtual resources and to deploy VNFs. This feature is good for managing resources and networking services for each site. It is also good to utilize Heat in combination with Ansible or Python Jinja2 for dynamic deployment. Tacker is a project at an early stage that aims to support VNFM and orchestration. There are some sessions about Tacker at this summit. Of course, we focus on this project for our NFV use case.

Next, we introduce our testbed. At NTT Communications, we manage an ISP and a cloud platform. Our testbed consists of a backbone network and a cloud, and it has connectivity to many sites. Many engineers utilize this platform to evaluate various network techniques within the NTT group. This testbed runs dynamic routing protocols and label switching, and peering to external ASes is also established. We have been evaluating a lot of virtual network appliances. We deploy various virtual network appliances into our testbed to realize carrier-grade NFV use cases. Today, we can provide many networking services with virtual appliances on our testbed: transit routers, carrier-grade NAT, firewalls, and MPLS routers, for example.

We introduce some requirements of our PoC. First, various VNFs will be deployed into suitable sites; what is suitable depends on each VNF's properties. Two important requirements are resource reservation and safe, stable deployment, no matter whether a site has rich hardware resources or small-sized resources. Next, we wish for OpenStack to have the interoperability to control other NFV platforms such as VMware. Today, such support has been expanding for VMware and Docker, but it seems that the networking support is insufficient. Orchestration in telco NFV is a serious problem, particularly for our use case, and support for it in OpenStack is currently at an early stage. Monitoring and failure detection across the physical layer and the virtualized layer is also important. In our PoC, virtual CPEs are installed at various sites.
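To give a rough idea of that repetitive flavor-create-and-boot step, here is a minimal sketch assuming a Liberty-era python-novaclient; the credentials, image name, and network UUID are placeholders for illustration, not our actual environment.

```python
# Minimal sketch (not our production tooling), assuming a Liberty-era
# python-novaclient: create a flavor sized for one virtual appliance and boot
# it, the API equivalent of repeating `nova flavor-create` and `nova boot`.
# Credentials, image name, and network UUID are placeholders.
from novaclient import client

nova = client.Client("2", "admin", "password", "nfv-project",
                     auth_url="http://controller:5000/v2.0")

# Flavor sized for one appliance; the right numbers vary per appliance.
flavor = nova.flavors.create(name="vnf.medium", ram=4096, vcpus=2, disk=20)

# Boot the VNF as an ordinary Nova instance on the management/underlay network.
image = nova.images.find(name="vnf-wan-accelerator")
server = nova.servers.create(name="wan-accel-01",
                             image=image,
                             flavor=flavor,
                             nics=[{"net-id": "NET_UUID"}])
print(server.id, server.status)
```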
When we deploy VNFs with OpenStack, we always worry about the capability of the virtual CPE at each site. At the cloud site, there are commodity servers which have many CPUs, flash memory, and Ethernet interfaces above one gigabit. Usually, these servers have a uniform hardware specification. They have capacity for many VNF and tenant deployments. It is roughly the same at carrier network sites. But at user sites, we can only deliver small-form-factor boxes to run VNFs. These are palm-sized servers, and their power consumption is very low. We chose these servers for the sake of user convenience. There are some challenges in realizing unified management of such a wide range of resources and VNFs. In terms of the number of servers to manage, it is difficult to realize unified management of the respective virtual CPEs. They are widely distributed and form a complex topology over layer-3 networks.

This video shows the operation workflow in our PoC. First, when a VNF is given to us, we verify its functions and performance and register the VNF to the service pool as an available network service. When a service plan request comes from a user, we build a service catalog for that user. These catalogs include information about the VNF, the deployment site, and the network project for the service (a small sketch of this per-site deployment step follows below). Finally, we start monitoring for resource usage and failure detection.

Next, we introduce three NFV use cases of our PoC. The first use case is network service federation between the cloud site and the user site. We chose virtualized WAN acceleration for this example. When you visit a foreign country, for example on business, you may want to get content from a cloud site in your home country. However, in general, it takes a long time to get the content because of the latency in the wide-area network. The WAN accelerator is one of our VNFs. This VNF reduces latency over the WAN and improves the user's networking experience. In this case, the suitable sites to deploy the VNFs are the cloud site and the user site which you access.

We show a short movie of our PoC. It will help you understand our deployment workflow. Users access our customer portal when they need any networking service. In this case, the user chooses the WAN acceleration service. The red area on the left represents your home country. The user wants to get content over the wide-area network; the file is a large ISO image. The WAN accelerator is not deployed yet, so the user suffers from latency and the download speed is low, about two hours for the file. The user wants to accelerate it over the WAN, so the user chooses the WAN accelerator and drags and drops it to deploy. In this case, the user chooses the cloud site and the user site for the VNFs. The WAN accelerator is deployed at the cloud site and the user site. Sorry. Both appliances start and are running. Okay, WAN acceleration is established, and the WAN traffic is now optimized. Unfortunately, this portal is built from scratch, not as a customized Horizon dashboard. In the following scenarios, we can deploy VNFs in the same way.

The second use case is automatic selection of the sites where VNFs will be deployed, based on VNF properties. We chose two VNFs for this example. The first one is a content cache service as a VNF. The content cache gives us two benefits: improvement of the user's network experience and reduction of traffic in carrier networks. To use the cache efficiently, we deploy this VNF closer to the user site, in this case not at a carrier network site. The other one is URL filtering. A network administrator makes rules about which packets to drop for harmful services. In this case, the suitable site to deploy the URL filtering VNF is a network-aggregation site such as the carrier network.
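As a sketch of the "Heat with Jinja2" combination mentioned earlier, the per-site deployment step of this workflow could render a small HOT template for each chosen site. The site names, flavors, and image names below are hypothetical placeholders, not our real catalog.

```python
# Minimal sketch of the "Heat + Jinja2" idea: render a small HOT template per
# chosen site, then hand it to Heat. Site names, flavors, and image names are
# hypothetical placeholders.
from jinja2 import Template

HOT_TEMPLATE = Template("""\
heat_template_version: 2015-04-30
description: Deploy {{ vnf_name }} at {{ site }}
resources:
  vnf_server:
    type: OS::Nova::Server
    properties:
      name: {{ vnf_name }}-{{ site }}
      flavor: {{ flavor }}
      image: {{ image }}
      networks:
        - network: {{ network }}
""")

# One rendering per site selected from the service catalog.
for site, flavor in [("cloud-tokyo", "vnf.large"), ("user-branch-01", "vnf.small")]:
    hot = HOT_TEMPLATE.render(vnf_name="wan-accel", site=site,
                              flavor=flavor, image="vnf-wan-accelerator",
                              network="underlay-mgmt")
    # The rendered YAML can then be passed to python-heatclient's
    # stacks.create() or `openstack stack create` against that site's endpoint.
    print(hot)
```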
In the telco carrier network, DDoS attacks have increased these days. This is a serious problem, and we should defend user services from these attacks. The third use case is VNF service federation between the cloud and carrier network sites. We introduce a DDoS defense service as the example VNF. When a user's service comes under DDoS attack, the network operator mitigates the malicious packets automatically and recovers the user's service immediately. The DDoS detection service will be deployed at the cloud site to analyze packet flows. The DDoS mitigation service will be deployed at the carrier network site to recover user services. These VNFs have to federate between different sites, the cloud and the carrier network.

Next, we introduce how we utilize OpenStack components in our environment. Unfortunately, as of today, the deployment status is still limited. On each virtual CPE, we basically use Neutron and Nova. We set up Nova with the basic procedure. For performance tuning, we configure CPU passthrough and PCI passthrough settings at all sites. The CPU allocation ratio is low, about 1.0 or 2.0. Huge pages settings depend on the RAM specification of each site. As you know, virtual CPU NUMA problems cause serious performance issues in virtual machines; virtual router performance in particular is significantly affected. We have to carefully tune the virtual CPU mappings to avoid this problem (a sketch appears below). In the Kilo and Liberty releases, Nova supports more policies for NUMA tuning. This is important for delivering VNFs to various sites.

For the networking setup, we use Neutron and Open vSwitch. We need the northbound interface of an Open vSwitch controller and IPAM, which Neutron supports. We disabled the layer-3 agent, and that function is delegated to the virtual router appliances. Now I will tell you why we disabled security groups: the reason is that some routing packets are dropped by the rules. Nova and Neutron API endpoints are deployed at all sites to handle the underlay network for each different case. We deploy the API endpoints of Nova and Neutron to all sites because of constraints on interaction across management segments. I would prefer that one Neutron API server could manage agents regardless of the management segment, but we have no straightforward way to solve that case. If anyone knows a good solution, please tell me.

To manage various VNFs, we embedded properties into each appliance image. These properties include information about the vendor, the version of the image, and the appliance type (also sketched below). This enables appliances to be selected automatically from the service type which the user chooses.

In the last stage of our PoC, we have to introduce NFV management and orchestration, called NFV MANO. We need the right components to support our operations. Currently, we are evaluating some vendor products for MANO. As a result, we have encountered some difficulties in supporting our operations. We consider it difficult to completely support our specific operations due to issues of flexibility, and there is also some vendor lock-in. OpenStack will continue to expand its use cases. In fact, Tacker and Mistral have emerged. Tacker is a project to support the VNFM. This component gives us a descriptor to define VNF behaviors. The project team has also declared support for lifecycle management and orchestration in the MANO layer. We are verifying this capability as MANO for our operations. If OpenStack supports the full stack of layers from VIM to MANO, there are many advantages, especially openness, flexibility, and interoperability for VNFs.

In our PoC, we have already verified scalability for compute. Nova has good capability to handle virtual appliances in distributed sites.
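As an illustration of the tuning and image tagging just described, the sketch below uses standard Nova flavor extra specs (hw:cpu_policy, hw:mem_page_size, hw:numa_nodes) and free-form image metadata; the property names vnf_vendor, vnf_version, and appliance_type are hypothetical labels of our own, not a standard schema, and `nova` is the same client handle as in the earlier sketch.

```python
# Illustrative sketch: size and pin a flavor for a virtual router, then tag its
# appliance image so the portal can pick it by service type. Extra spec keys
# are standard Nova ones; the image property names are hypothetical.
flavor = nova.flavors.create(name="vnf.vrouter", ram=8192, vcpus=4, disk=40)
flavor.set_keys({
    "hw:cpu_policy": "dedicated",   # pin vCPUs to dedicated host cores
    "hw:mem_page_size": "large",    # back guest RAM with huge pages
    "hw:numa_nodes": "1",           # keep the guest within a single NUMA node
})

# Attach descriptive metadata to the appliance image for automatic selection.
image = nova.images.find(name="vnf-vrouter-2.1")
nova.images.set_meta(image, {
    "vnf_vendor": "example-vendor",
    "vnf_version": "2.1",
    "appliance_type": "vrouter",
})
```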
It is possible to manage various VNFs for each site. Nova can manage VNF instances on virtual CPEs flexibly. If the servers and compute agents are connected at layer 3, host aggregates and availability zones are supported. Nova also supports Cells for more complex cases. In contrast, it is difficult to provide site-specific network flexibility with one Neutron server. Currently, one Neutron server can manage many agents, including OVS agents. But in our case, the underlay networks carrying the VNFs are different at each site. Neutron does not have the scalability to manage the underlay in this respect.

In the previous steps, we integrated OpenStack into our PoC. Unfortunately, it is difficult to completely realize our PoC with the current OpenStack core components. We introduce some issues, or rather points to be improved in OpenStack, to meet telco requirements. At user sites, we manage resources directly because of the low capacity of the virtual CPE. We expect this to be better supported in Nova going forward. In case overcapacity occurs during VNF deployment on a user-site virtual CPE, we may have to consider dynamic VNF migration to other sites. In the case of VNF deployment across regions, we need a straightforward way to synchronize VNFs between different regions, but the current resource schema is strongly tied to a region that shares the same management segment.

Our PoC consists of many types of sites, so there are sites which differ from native OpenStack, such as VMware. We want to operate networking with Neutron at all sites, but in VMware the networking model and the Neutron schema are incompatible. There are a lot of new features in the Liberty release, including support for networking-vsphere in Neutron. The Nova flavor is widely used to catalog compute resources. We have to pay attention to trends around other flavors, such as the networking flavor blueprint. In the future, we want to be able to customize flavors, covering both compute and networking, to build the catalog of our services. We have a worrisome question: which component should have the role of describing such catalogs, Nova, Neutron, or a Heat template? We are interested in several components related to our PoC. For example, there are some important features and blueprints in Heat. We focus on auto-scaling and dynamic flavors, because these features enable us to control networking services more flexibly. We also focus on Neutron's flavor blueprint.

OpenStack is the strongest platform in terms of cloud computing. However, in the course of our activities, we found some gaps between current OpenStack functions and telco operators' requirements. Currently, the main players who talk about NFV use cases are networking vendors. They contribute upstream requirements early. Service providers are also main players in sharing user examples. In the OpenStack community, it is obvious that there are few requirements expressed in terms of telco operations. If more telco operators join the community to share their use cases, we believe OpenStack for telco NFV will be accelerated.

In today's presentation, we introduced our PoC for telco NFV. Our concept is an ideal model for users and telco operators. To build our PoC, we deployed OpenStack with the basic procedure. We also verified scalability in terms of compute and underlay networking. There are some requirements and issues around OpenStack deployment in our case. We find that we need to verify early-stage projects in addition to the basic components. These are our next steps to support telco NFV use cases. We will share more use cases with the related working groups.
Also, our PoC results will be shared to cover a wide range of telco NFV scenarios. We will try to deploy early-stage components in our testbed to verify and enhance them. We will accelerate upstream contributions to support early-stage projects and realize real-life OpenStack-operated NFV use cases. That is all for this presentation. Thank you for listening. Please contact me, or visit our booth, P9. Thank you.