Hello, everyone. Thank you for attending this session about enhancements to OpenStack networking for a carrier cloud platform. My name is Jun Makishi, and I am a network architect at NTT Communications. I have been engaged in the NTT SDN Cloud Project for four years and contributed to launching multiple SDN-enabled services in production. Currently, I'm leading the network part of the Cloud Project to develop a brand new cloud platform named the Next Generation Cloud Platform. Today, I'm going to describe how we enhanced OpenStack networking for this carrier cloud platform.

This is the agenda of today's session. First, I will make a brief introduction to our Next Generation Cloud Platform and its architecture. Then, for today's main topic, I will describe how we enhanced OpenStack networking by combining it with vendor solution features. A demo and the conclusion will come next.

For the introduction, I will describe our new cloud platform and how we are going to integrate OpenStack and vendor solutions. This is the concept of our Next Generation Cloud Platform. Mr. Klihara will describe its details and the service concept in the next session, so I won't go into detail, but I want to emphasize that in this cloud platform we will introduce SDN technology to connect various cloud resources, including public cloud, hosted private cloud, bare metal servers, and storage services. We are also going to provide APIs to customers.

This is the brief system architecture of our platform. We have developed the platform based on a microservice design, the same as OpenStack does. Because I'm mainly leading the network part, in this session I will focus on the networking part, and specifically the network service. For the network service, we take advantage of OpenStack technologies, which means the Neutron APIs. And this is the whole network architecture of our cloud platform.
To meet our enterprise customers' needs, we need to support connectivity for bare metal servers, provide a high-quality gateway service, and offer various NFV services to customers. To connect each component, we use an SDN controller, and on top of the controller we are using our original orchestrator, named Elastic Service Infrastructure.

This is the whole concept of our approach. We categorize networking into two types of service. The first is cloud networking; the second is network applications and functions. Cloud networking provides connectivity between components. Because it is simple, and because Neutron has a well-defined network abstraction for layer 2 and layer 3 networking and IP address management, we would like to fully utilize the open source technology, Neutron, here. For the second category, we define network functions. For this category, we believe there are many NFV and other appliance vendors with competitive features, so we want to integrate such competitive vendor solutions with the open source cloud networking and provide the whole service as network-as-a-service to enterprise customers.

This is a brief description of our orchestrator, Elastic Service Infrastructure (ESI). It was developed by NTT Innovation Institute, Inc. It was originally developed for software-defined WAN, but we enhanced its capability to support a cloud platform, because ESI has powerful features that enable us to integrate VNFs easily. I will describe this later.

From this section, I will describe the difficulties we faced in integrating OpenStack networking and vendor solutions, and show some examples of our approaches. Before going into the details, I will just list the fits and gaps between Neutron and our use cases.
We have many use cases to meet enterprise customer needs, like network connectivity between VMs or bare metal servers, internet gateways, VPN gateways, and NFV. We found that we can fully utilize Neutron functionality for networking between VMs, but for the others, we found that we need to extend or add APIs to meet our use cases. I know there are several ongoing discussions and projects to address such features, but we believe those projects are not mature yet, and they don't perfectly fit our use cases.

So I will begin describing our approaches. First, the most fundamental functionality we need is to provide layer 2 connectivity between VMs and also virtual routers, though I didn't describe it here. But we don't want to expose the underlay topology to customers. I think this is the same approach as Neutron takes: we introduce a virtualized overlay network and fully utilize the network abstraction that Neutron provides. To be specific, we apply the same idea as Neutron, exposing network, subnet, and port resources to customers. I think this is quite simple enough for customers, and I like this approach because users don't need to think about the underlay topology, but only about their own network topology.

But for bare metal support, we couldn't use the current Neutron API. Enterprise customers want bare metal servers for their mission-critical applications or, as they say, a hybrid cloud or hosted private cloud. And if customers want to build their own cloud on top of our platform, they want to decide the VLAN segmentation by themselves. I know that in the Neutron project there is the layer 2 gateway discussion, but as far as I know, it doesn't support delegating VLAN segmentation to customers. So what we decided is to introduce a new resource named physical port. This is an abstraction of a bare metal server NIC, fully dedicated to the customer.
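As a minimal sketch of the physical port idea, here is what a create request for such a resource might look like. All field names here are illustrative assumptions for the sketch, not the actual ESI API:

```python
# Hypothetical request body for creating a "physical port" resource:
# an abstraction of a bare metal server NIC dedicated to one tenant.
# Field names are illustrative assumptions, not the actual ESI API.
physical_port_request = {
    "physical_port": {
        "name": "bm-server-01-nic0",
        "tenant_id": "tenant-a",
        # the datacenter-side identity of the NIC, managed by the operator
        "switch": "tor-sw-01",
        "port_id": "eth1/1",
    }
}

def validate_physical_port(body):
    """Minimal validation of the hypothetical payload shape."""
    pp = body["physical_port"]
    missing = {"name", "tenant_id", "switch", "port_id"} - pp.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return pp
```

The key design point is that the operator-side details (switch, NIC) stay managed by the platform, while the resource itself is dedicated to, and visible to, a single customer.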
So in this case, the user has a bare metal server's physical port as a network resource, and by choosing the VLAN ID, the user can freely connect the bare metal physical port to their designated network.

For the gateway support, as you know, in a cloud network the gateway is the point where much of the traffic converges. So we need a robust, stable, and high-performance gateway for enterprise customers, to make sure that customers won't face a service outage. For this, we just use physical routers in an HA configuration, as we have done in many data centers, and introduce a new resource named gateway as an abstraction of a pair of routers. I know that there is a resource named router in Neutron, but rather than using that model, we made a completely separate resource, because we need to add more gateway-specific features to this gateway resource.

One of the gateway-specific features is the multiple gateway option. As a network service provider, we have many types of external network to provide to customers, like internet or MPLS VPN. Within each external network service, we also have many service grades, like 10 Mbps guaranteed or 100 Mbps best effort. What we want to do is let customers choose which external network they want to use. So we provide a catalog of external networks that customers can browse, let them choose an external network, and instantiate the VPN or internet gateway. In this case, a customer can connect to their branch office or central office over a secure MPLS VPN while providing web services through the internet service.

The next, and one of the most important, topics is NFV. We want to support customers in booting and monitoring various virtual network functions, or VNFs. But as you may know, booting and managing VNFs is not handy enough for customers.
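The external network catalog and gateway instantiation flow could be sketched like this. The catalog entries, QoS option names, and function signature are all assumptions for illustration, not the real service API:

```python
# Illustrative catalog of external network services and their QoS grades.
# Service IDs and option names are assumptions for this sketch.
EXTERNAL_SERVICES = {
    "inet-svc-1": {"type": "internet",
                   "qos_options": ["besteffort_100M", "guaranteed_10M"]},
    "vpn-svc-1":  {"type": "mpls-vpn",
                   "qos_options": ["guaranteed_10M"]},
}

def create_gateway(service_id, qos_option):
    """Instantiate a gateway against a chosen external service and QoS grade."""
    svc = EXTERNAL_SERVICES[service_id]
    if qos_option not in svc["qos_options"]:
        raise ValueError("QoS option not offered by this service")
    # behind the scenes this would be backed by an HA pair of physical routers
    return {"gateway": {"service_id": service_id,
                        "qos": qos_option,
                        "status": "ACTIVE"}}
```

A customer would first list the catalog, then call `create_gateway` once per external network they need, e.g. one MPLS VPN gateway and one internet gateway.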
So we want to manage such monitoring, booting, and onboarding on behalf of customers, but we don't want to lose the vendor appliance features. Our approach is to provide a managed VNF API, as VNF firewall and load balancer resources. We manage only the booting and monitoring processes on behalf of the customer, and simply expose the vendor API and UI. So in this picture, a customer asks us, the Elastic Service Infrastructure, to boot the VNF and configure its network configuration, but after that, the customer can fully enjoy the VNF-specific features directly. And this is the tenant view provided to the customer.

I think in Neutron we have Firewall-as-a-Service, but that applies firewall rules to the whole tenant network. What we want to do is provide a firewall instance as a single component, to let customers decide their own network topology.

Because we want to provide various VNFs to customers, we want to reduce the time to market for these features. But each VNF has a different boot sequence; for example, each has a different license activation process and so on. So we want to easily onboard VNF features onto our networks. What we use is the Gohan framework, used in the Elastic Service Infrastructure, to define a VNF-specific template easily. For the details of the Gohan framework, we will have a session tomorrow, so if you are interested, please check out that session. To define the VNF-specific configuration, we made VNF templates using the powerful Gohan schema and template features.

The last topic for NFV, and the most important one, is that we want to provide a network attach and detach feature to customers. As I described before, we want to provide the firewall, load balancer, and so on as single components, so the user can decide the network topology. That means we want to let the customer freely attach to a network and detach from the network.
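To give a feel for the per-VNF onboarding template idea, here is a loose sketch in Python. Gohan itself uses its own schema and template format, so the structure below, the step names, and the firewall example are all assumptions made for illustration:

```python
# Loose sketch of per-VNF onboarding templates, in the spirit of the
# Gohan schema/template approach. Step names and fields are assumptions.
VNF_TEMPLATES = {
    "vendor-firewall": {
        "image": "vendor-fw-1.0",
        "nics": 6,  # fixed number of persistent NICs
        "boot_sequence": ["boot_image", "activate_license", "push_base_config"],
    },
}

def onboard_vnf(vnf_type):
    """Run the vendor-specific boot sequence on behalf of the customer."""
    tmpl = VNF_TEMPLATES[vnf_type]
    executed = []
    for step in tmpl["boot_sequence"]:
        # a real handler would dispatch each step to a vendor-specific driver
        executed.append(step)
    return {"vnf_type": vnf_type, "steps_done": executed, "monitored": True}
```

The point of the template is that adding support for a new vendor appliance means writing a new template entry, not new orchestration code, which is what shortens time to market.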
But we found that if we just use the Nova interface-detach API, Nova removes the interface and the PCI numbering is shifted. The VNF then follows the shifted numbering, and its interface numbers change. It means that if we remove a network, the user's network topology gets messed up. This was quite a big problem for us, because we want to provide free network attachment and detachment. So we changed our idea: instead of detaching or attaching the interface itself, we just provide a feature to unplug the network. We define the VNF with a fixed number of persistent NICs. In this case, we have a firewall VNF with six NICs pre-installed, and the user can specify which NIC is connected to which network.

The last topic for our challenges: after we provide network resources to the customer, we also need to monitor their operational state and metrics, because we want to know what is going on in the actual infrastructure, let the customer know whether a resource is available, and show how much of each resource they have used. To do so, we utilize the Elastic Service Infrastructure's monitoring functionality to monitor the operational state and metrics of the vRouters and top-of-rack switches. As a result, we can provide metrics per customer resource, so users can know how much traffic they have used per resource. Also, if there is some infrastructure failure, we can change the operational state of the resource and let the customer know that it is not available. We believe this approach greatly improves the quality of our service, because it improves the visibility of the network service.

Now, I have prepared a simple demo to describe our approach. In this demo, we will create network resources one by one through the API. This is the demo network topology. The goal of the demo is for the bare metal server to connect to an external server over the internet.
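The fixed persistent NIC model can be sketched as follows. The class and method names are hypothetical; the point is that attach/detach only changes what a NIC slot is plugged into, never the number of NICs, so PCI numbering inside the guest never shifts:

```python
# Sketch of the "fixed persistent NICs" model (names are hypothetical).
class ManagedVNF:
    def __init__(self, num_nics=6):
        # NIC slots are permanent for the life of the VNF; None = unplugged
        self.nics = [None] * num_nics

    def plug(self, slot, network_id):
        """Connect a persistent NIC slot to a customer network."""
        self.nics[slot] = network_id

    def unplug(self, slot):
        """Remove connectivity only; the NIC stays present in the guest,
        so the guest's PCI numbering is unchanged."""
        self.nics[slot] = None
```

For example, a firewall VNF created with six NICs keeps six NICs forever; detaching a network just empties one slot, avoiding the renumbering problem the Nova detach API caused.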
Because this is done in the lab, please assume that 192.168.0.0/16 is a public IP range. To connect the server to the external server, we will create two types of network: an internal network and an external network. For the internal network, we will assign private IP addresses, connect the bare metal server via a VLAN interface, not a physical interface, and also connect the firewall. For the external network, we will create an internet gateway to make sure the packets go to the internet, subscribe to a public IP block from the gateway service, and attach the firewall. Please note that I haven't described this feature yet, but we also extended the gateway feature to let customers subscribe to a block of public IPs. So in this case, we will ask the gateway service for a block of public IPs and assign it to this external network directly.

This is the demo. First, we create the internal network using the Neutron client. This shows that our API is compatible with the Neutron API, and we can use the Neutron client to create a simple network. We can also use the Neutron client to create a subnet to assign a private IP address range. Then we connect the bare metal server via a VLAN interface to the internal network. This is our completely brand new API, named physical port. The physical port is dedicated to the customer, so the customer can see what kind of physical port they are using. Then, using the physical port as the device ID, we create the port resource, specifying the segmentation ID, VLAN ID 100, and the network to be attached. This is the result. This is also compatible with the Neutron API: we extended some of the attributes, like segmentation ID and segmentation type, but the rest of the resource is the same. To prepare the connectivity test, we go into the bare metal server, configure the IP address, and ping the external server.
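The extended port create request from this demo step might look like the following. The `segmentation_type` and `segmentation_id` attributes are the extensions described in the talk; the exact payload shape and the ID values are assumptions for the sketch:

```python
# Sketch of the Neutron-compatible port create request used in the demo:
# standard Neutron port fields plus the segmentation extension attributes.
# Concrete IDs are placeholders for illustration.
port_request = {
    "port": {
        "network_id": "internal-net-id",
        "device_id": "physical-port-id",  # references the physical port resource
        "segmentation_type": "vlan",      # extension attribute
        "segmentation_id": 100,           # extension attribute: VLAN ID 100
    }
}

# Everything except the two segmentation_* keys is a standard Neutron
# port attribute, which is why the plain Neutron client still works.
EXTENSION_ATTRS = {"segmentation_type", "segmentation_id"}
standard_attrs = {k: v for k, v in port_request["port"].items()
                  if k not in EXTENSION_ATTRS}
```

This mirrors the claim in the demo: the resource stays Neutron-compatible, with only the segmentation attributes added on top.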
But because we haven't created the firewall or the internet gateway yet, of course it's unreachable. Next, we create the internet gateway. First, we list the available internet services using the API; we have only a single internet service provider for this demo. Next, the user can list the QoS options for creating the internet gateway. In this case, we have only a 10 Mbps best-effort option. To create the internet gateway, we specify the service ID we listed before and the QoS option we saw. And the gateway instance has been created.

Before creating the external network, we need to subscribe to a public IP block from the gateway service. We just specify the prefix length of the public IP block to subscribe from the gateway service, and our network service automatically allocates an available public IP block to the customer. In this case, 192.168.36.0/28 is the public IP block offered to the customer. Again using the Neutron client, we create a network and a subnet, and here we specify the public IP block we subscribed to. Then we attach the gateway to the external network; we just need to specify which IP address to use for the internet gateway.

Last, we create the firewall. In this case, we specify two networks, the internal network and the external network, and create the firewall instance. In this model, we support only two NICs, and the NICs are attached to the external network and the internal network respectively. Let's check the connectivity from the bare metal server to the external server again. Well, it takes time to boot the firewall, so... yes. Finally, the bare metal server can connect to the external server. That is all for the demo. I think it's quite simple, but you can easily see that we introduced and extended Neutron APIs to meet our enterprise customers' needs.

So, conclusion. Let me recap.
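The public IP block subscription step can be sketched with the standard library `ipaddress` module. The pool range mirrors the lab assumption above (192.168.0.0/16 treated as public); the function name and allocation strategy are assumptions:

```python
import ipaddress

# Sketch of the public IP subscription step: the customer asks for a block
# of a given prefix length, and the service carves one out of its pool.
# The pool mirrors the lab setup, where 192.168.0.0/16 stands in for
# public address space. Function name and strategy are assumptions.
PUBLIC_POOL = ipaddress.ip_network("192.168.0.0/16")

_allocated = []

def subscribe_public_block(prefixlen):
    """Allocate the first free block of the requested size from the pool."""
    for block in PUBLIC_POOL.subnets(new_prefix=prefixlen):
        if block not in _allocated:
            _allocated.append(block)
            return block
    raise RuntimeError("public IP pool exhausted")
```

In the demo, the customer asked for a /28 and the service returned one such block, which was then used as the CIDR of the external network's subnet.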
We have integrated OpenStack networking with vendor solutions, because we need to meet enterprise customers' needs for high quality, availability, scalability, flexibility, and so on. To overcome the difficulties, we extended the APIs as I showed. Please also note that we take advantage of both hardware and software appliances: we don't stick to one single technology, but choose either hardware or software per request.

Of course, because we have created brand new APIs based on our original orchestrator, we are willing to feed our outcomes back to the OpenStack upstream. Actually, I like Neutron's approach to adding extensions, but I feel we would like a faster process for introducing new service APIs like the ones we built. Also, one thought: introducing OpenStack in production required not only a technical approach; from the perspectives of deployment process and vendor support, we faced some barriers, and barriers still remain. So we would like to keep watching and contributing to the OpenStack community to make sure that OpenStack networking can also meet enterprise customer needs. These features will be available in the Next Generation Cloud Platform we are installing in production.

One advertisement: one of the key technologies we used in ESI, named Gohan, will be open sourced, and the details will be described in the Thursday session. Please attend that session if you are interested. We also have an NTT Com booth, so if you have questions, you can ask now, and after that I will be around there. Please ask me questions freely.

Thank you for listening to my presentation. Any questions? I think I have 10 minutes left. If you have any, please come directly to me. Thank you.