Hi, everybody. My name is Hidetsugu Sugiyama. I'm a senior principal technologist at Red Hat. I'm here with Seisho Yasukawa of NTT Network Technology Laboratories. Today we are going to share a new joint activity we call the Carrier Edge PaaS. First, I'll present the concept of the Carrier Edge PaaS; after that, Seisho will talk about NTT's project for the Carrier Edge PaaS.

You probably know about bimodal IT. Bimodal IT is a two-tiered IT operational style: SoR is a system of record, and SoE is a system of engagement. SoR allows for the creation of systems and processes that are stable and predictable; carrier-grade infrastructure development is in the SoR style at this stage. SoE allows for the creation of systems and processes that are agile and fast; SoE focuses on DevOps and continuous improvement, which we call kaizen in Japan. The Carrier Edge PaaS can harmonize the SoR and SoE styles. It provides an application DevOps environment for the middle-B business players, using the OpenShift Container Platform on top of an OpenStack NFV computing infrastructure that is built in the SoR style.

There was some news: the FCC is going to repeal net neutrality; maybe you know that. Most telecom carriers currently don't have compute and storage resources at the telco edge node. That's why they need a big pipe from their network to the cloud, or they somehow want to control the traffic to the cloud. The Carrier Edge PaaS can potentially change this situation. It can harmonize the cloud environment and the edge computing environment: it is decentralized, but interoperable with the cloud data center, and it can provide cloud-native services at the telco edge node.

There are many potential use cases. A content delivery network is the classic example for the cloud edge, as you might know, and Hybridcast TV is another potential use case.
You'll be able to serve HTML5 content from the telco edge node to Hybridcast TVs, which receive the 4K broadcast stream along with local service providers' data. AI is also a promising use case: we can locate AI intelligence at the telco edge node with GPUs, which we usually call "edge-heavy computing" in Japan. And IoT, robotics, and the connected car are the major use cases. The automotive industry needs huge amounts of data for V2X. According to the Automotive Edge Computing Consortium, the monthly data volume between cars and the cloud will grow to 10 exabytes by 2025. But most of that data is geo-oriented: for example, when you drive a car in Austin, you don't need data for the city of Tokyo, right? So there is an opportunity to manage geo-oriented data at each telco edge node.

The Carrier Edge PaaS can give you the flexibility to develop new business on top of the telco edge node. NTT is now spending resources to develop a new architecture for the Carrier Edge PaaS based on a new business concept called B2B2X: NTT's business, to the main player's business, to the main player's users or devices. So the main player, whom we call the middle B, can use the DevOps environment to develop new services and deliver them to their users or devices through NTT's Carrier Edge PaaS infrastructure, and network resources and security services will be provided by NTT to the middle B. For further detail, Seisho will talk about the NTT project. Seisho, over to you.

Thank you, Hide. Good afternoon, everyone. My name is Seisho Yasukawa from NTT, and I'm honored to have the opportunity to present our activity to you today. I belong to NTT Laboratories, where I lead an architecture team, and we are studying future network design for the 5G/IoT time frame. So I would like to present our study results related to edge PaaS technologies.
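The idea of managing geo-oriented data at each telco edge node can be illustrated with a toy sketch: each node serves only its own region's data, so a car in Austin never pulls Tokyo's map data. This is a minimal illustration in Python; the node names, regions, and data sets are hypothetical, not an actual NTT design.

```python
# Toy illustration of geo-oriented data placement: each telco edge
# node holds only its own region's V2X data. All names here are
# hypothetical examples.

EDGE_NODES = {
    "austin": "edge-node-tx-01",
    "tokyo": "edge-node-jp-13",
}

REGION_DATA = {
    "austin": {"hd-map": "austin-map-v42", "traffic": "austin-live"},
    "tokyo": {"hd-map": "tokyo-map-v17", "traffic": "tokyo-live"},
}

def serving_node(region):
    """Pick the edge node that holds this region's geo-oriented data."""
    return EDGE_NODES[region]

def fetch_v2x_data(region):
    """A car fetches only its current region's data from the local edge."""
    return serving_node(region), REGION_DATA[region]

node, data = fetch_v2x_data("austin")
print(node, data["hd-map"])
```

The point is simply that the lookup is keyed by the car's current region, so the exabyte-scale V2X data set is partitioned across edge nodes rather than hauled in full to a central cloud.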
As shown in the figures, NTT wants to collaborate with middle-B partners, such as IoT service providers and content providers, to develop a new digital market. In the backend-B position, NTT wants to provide a collaboration platform to middle-B partners, and the network is a strong tool for that platform. But let me guess how a carrier's network looks from the middle-B users' point of view. The network is very attractive because we have a lot of access lines to the users. On the other hand, the network may not be attractive because of its lack of flexibility and the long lead time to tune up and deploy a service. So I'm afraid many people say that the network does not have to play more than a pipe. Is that really OK? I don't think so. By putting functions in the network, we can support real-time control for self-driving cars, mass data processing for IoT/M2M services, and flexible service composition for SDx services. A middle-B partner can even offload non-business logic, such as security. Security is a very important factor in implementing a service, but it is sometimes difficult to handle; if middle-B partners can offload security issues to the carrier, they can concentrate on their main service logic.

Then I would like to discuss the difference between carrier grade and cloud native. In the conventional, traditional carrier-grade world, a service is developed from scratch, so dedicated equipment and a dedicated service scenario are necessary, and waterfall-based service development is used. In the cloud-native world, on the other hand, we can develop a service by customizing a template: we can utilize common equipment, automation, workflows, and APIs, so agile-based service development and continuous service deployment are possible. We think these cloud-native features are very attractive, so we decided to incorporate them on top of our Carrier Edge PaaS.
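The contrast between full-scratch and template-based development can be sketched roughly as follows. This is a minimal illustration in Python; the catalog entry, field names, and function names are hypothetical examples, not NTT's actual service catalog or API.

```python
# Minimal sketch of catalog/template-based service composition.
# The catalog entries and fields below are hypothetical examples.

SERVICE_CATALOG = {
    # A template bundles the common equipment and workflow defaults,
    # so the middle-B partner only customizes what differs.
    "managed-iot": {
        "components": ["cpe", "v6-router", "dhcp", "dns", "iot-gateway"],
        "network": {"vpn": "mpls", "addressing": "ipv6"},
        "app_image": None,   # supplied by the middle-B partner
    },
}

def order_service(template_name, **overrides):
    """Instantiate a catalog template with middle-B customizations."""
    template = SERVICE_CATALOG[template_name]
    order = {**template, **overrides}
    if order["app_image"] is None:
        raise ValueError("middle-B partner must supply an application image")
    return order

# The middle-B partner brings only its application and a few parameters;
# everything else comes from the carrier's template.
order = order_service("managed-iot", app_image="robot-controller:v1")
print(order["components"])
```

The design point is that the carrier maintains the common parts (equipment, workflows, APIs) once, and each middle-B service becomes a small customization on top, which is what enables agile, continuous deployment instead of waterfall.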
This slide shows our target service example. We want to build services by combining cloud components and network components. For example, we can construct a managed IoT service by installing an IoT application in the public cloud and installing an IoT gateway in an edge DC cloud within the carrier network. We should monitor the network status and change the network configuration automatically. This example shows changing the service chaining: we insert a DPI and a cleanser into the chain to check for malicious flows and eliminate the attack packets. In this way, we, the carrier, and the middle-B partner can collaborate, and we want to accomplish this kind of collaboration using this platform.

As I said, flexibility and agility are very important, so we would like to introduce the SmartPipe using the Carrier Edge PaaS. As you know, a conventional pipe only provides a connection between user and application and is very static. With the SmartPipe, on the other hand, we can use the pipe for multiple purposes and change its condition in a very dynamic manner. For example, for CDN, we can put a CDN cache and a TCP booster inside the connection; for security, we can put a DPI and a cleanser inside the pipe, and if we want to check a malicious flow, we can change the network configuration so that the cleanser can eliminate the attack packets. To realize this kind of SmartPipe, we need to arrange the necessary functions on an on-demand basis and change the network condition dynamically, so we need a new architecture.

This slide shows the Carrier Edge PaaS architecture. We will introduce an edge PaaS within the carrier network, and we will locate function pools there. Within a function pool, we will prepare service functions such as an HTTP proxy, a load balancer, a cache, DDoS mitigation, and so on. These are functions we think will be useful for middle-B partners when they construct their services.
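A SmartPipe can be modeled as an ordered chain of functions through which traffic is steered; inserting a CDN cache or a DPI and cleanser on demand is then just a change to that chain. This is a simplified model in Python, not a real SDN controller API; the class and function names are illustrative only.

```python
# Minimal model of a SmartPipe: an ordered service chain between
# user and application. Function names are illustrative only.

class SmartPipe:
    def __init__(self):
        # A conventional pipe: nothing between user and application.
        self.chain = []

    def insert(self, function, position=None):
        """Steer traffic through an extra function, on demand."""
        if position is None:
            self.chain.append(function)
        else:
            self.chain.insert(position, function)

    def remove(self, function):
        """Take a function back out when it is no longer needed."""
        self.chain.remove(function)

    def path(self):
        """The end-to-end path traffic currently follows."""
        return ["user"] + self.chain + ["application"]

pipe = SmartPipe()
# CDN use case: cache and TCP booster inside the connection.
pipe.insert("cdn-cache")
pipe.insert("tcp-booster")

# Security use case: on detecting a suspicious flow, steer it
# through DPI and a cleanser before it reaches the application.
pipe.insert("dpi")
pipe.insert("cleanser")
print(pipe.path())
```

In the real architecture, the orchestrator and SDN controller would perform the equivalent of `insert` by instantiating functions from the edge function pool and re-steering the flow.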
A middle-B partner will construct a service by combining a CP function, a function on the edge PaaS, and a function on the public cloud using the SmartPipe. They configure the service through APIs; then our orchestrator and SDN controller install the necessary functions in the network and control the service condition. We think this kind of architecture is necessary. So, to prove the concept, we developed a PoC environment.

I would like to discuss the scenarios we developed. This is the day-one scenario. As shown in the figure, we, the carrier, provide an IoT service platform on an IPv6 network infrastructure, and an IoT service provider provides a remote monitoring service using the platform we provide. The carrier operates the core MPLS VPN network infrastructure, the access network, and the data center infrastructure. Within the data center we have NFVI and container nodes. Using the IPv6 network catalog, we construct the CPE, v6 routers, DHCP servers, and DNS, so that basic IPv6 communication is established. Then we prepare an IoT service catalog for the middle-B partner, the IoT service provider. The middle-B partner only has to prepare a robot-controller application and customize the catalog. After that, our controller installs the IoT service on top of our infrastructure. In this way, the IoT monitoring service is deployed: the robot performs name resolution and IPv6 authentication, an IPv6 tunnel is established, and the robot monitoring service can be launched.

We also developed a day-two scenario: a managed IoT security service. Let's look at the sequence. Assume the robot controller is hacked by an attacker, and malicious communication starts. Because we have installed probe routers with a whitelist-based filter, they notice that malicious communication is occurring. The router sends a notice to Fluentd, ElastAlert detects the suspicious flow, and ElastAlert invokes the workflow engine StackStorm.
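The detection chain just described (probe router → Fluentd → ElastAlert → StackStorm) is an event-driven pipeline. The following is a simplified simulation of that flow in Python, not the actual Fluentd, ElastAlert, or StackStorm configuration; the whitelist entries, log format, and action names are all illustrative.

```python
# Simplified simulation of the day-two detection chain:
# probe router -> Fluentd (log collection) -> ElastAlert (rule match)
# -> StackStorm (workflow trigger). Names and formats are illustrative.

WHITELIST = {"robot-controller", "dns", "dhcp"}

def probe_router(flows):
    """Whitelist-based filter: emit a log line per non-whitelisted flow."""
    return [f"deny src={flow}" for flow in flows if flow not in WHITELIST]

def elastalert(log_lines):
    """Rule match over collected logs: raise an alert per suspicious line."""
    return [line for line in log_lines if line.startswith("deny")]

def stackstorm(alerts):
    """On each alert, invoke the remediation workflow steps."""
    actions = []
    for alert in alerts:
        actions.append(("setup_mirror_path", alert))
        actions.append(("launch_security_analyzer", alert))
    return actions

# A hacked robot controller starts talking to an attacker endpoint;
# only the non-whitelisted flow triggers the workflow.
logs = probe_router(["robot-controller", "attacker-c2"])
actions = stackstorm(elastalert(logs))
print(actions)
```

In the real PoC, each stage is a separate component: the probe router emits logs, Fluentd forwards them, ElastAlert matches rules against them, and StackStorm runs the workflow that sets up mirroring and launches the analyzer.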
StackStorm then launches the second workflow: it sets up a mirroring path and launches the security analyzer, and the flow is mirrored to the security analyzer. After the security analyzer performs a detailed security analysis and detects the specific attack, it sends a notification via syslog. Again, StackStorm launches a new workflow, sets up a steering path, and launches the cleanser. The attack flow is steered to the cleanser, and the cleanser eliminates it. In this way, we can eliminate the attack flow.

This slide shows the PoC environment. Using these kinds of applications, including OpenShift, we could develop the PoC environment easily. And this slide shows the screen image: using the workflow engine, we can modify the service control mechanism. In performing the PoC, we found that containers are a very strong basis for realizing a Carrier Edge PaaS, but we also found that some customization effort is needed. For carrier services, we want to identify users by VLAN, and we want to utilize IPv6, but the current OpenShift does not support these features at this moment. So we deployed the IoT gateway outside OpenShift to accommodate VLAN and v6/v4 conversion. And because we want to accommodate transport applications that need multiple interfaces on a container, we used plain Docker containers to deploy the DPI and the probe.

In conclusion, I would like to propose a Carrier Edge PaaS for cloud-native services. I think that by combining carrier services and cloud services via the Carrier Edge PaaS, we can expand service capability and produce many attractive services. We'd like to get feedback from you and from middle-B service providers. That's all. Thank you.

This is the contact information. We created a mailing list, b2b2x at letter.com. If you have any questions or are interested in the Carrier Edge PaaS, please contact this mailing list.