And welcome again to yet another OpenShift Commons gathering. Today we're doing this session at an APAC-friendly time for once. I'm really pleased to have Hide Sugiyama from our Red Hat Technology Office, who is going to talk today about multi-access edge computing and OpenShift. We're going to go for about 40 minutes of talking. If you have questions, please ask them in the chat, and there'll be some live Q&A at the end. This is a new topic for most of us, so I'm really pleased that you're going to get Hide's introduction to it. And hopefully you'll join us again at the OpenShift Commons Gathering, where some of the folks from NTT are going to be joining us along with Hide to talk about this in more depth. So Hide, welcome, and please take your time. Thank you for having me, Diane. Hello, everybody. My name is Hide Sugiyama. I'm a Senior Principal Technologist at Red Hat. I'm now driving several projects for OpenStack NFV edge computing and edge PaaS with OpenShift Container Platform, or Cloud Suite. We have a plan for a joint session with NTT Network Technology Laboratories at the OpenShift Commons Gathering in Austin to talk about their telco edge PaaS strategy for cloud-native services in their edge computing infrastructure. Prior to that event, I will share with you today the design for a telco edge PaaS in an NFV edge computing infrastructure. So in this session, I'll cover the concept of edge computing and the idea of how to adapt an OpenShift edge PaaS to telco edge infrastructure. And lastly, I will share with you PoC use cases, including NTT's edge PaaS PoC, which has the capability to provide platform services for industrial IoT, such as IoT robotics and connected car, V2X, running in the edge computing infrastructure. With that, let me start. There are several activities relating to edge computing development. 
In early 2014, NTT R&D announced its edge computing technology concept, which can reduce cloud application latency to within 10 milliseconds by locating the edge server at the telco edge node. Cisco has also been spending a lot of time developing fog computing since 2013 for the Internet of Everything, as you know; this also covers edge computing, and they established the OpenFog Consortium in 2015. Also, the European Telecommunications Standards Institute, ETSI, launched the MEC Industry Specification Group. Initially they called it mobile edge computing, and then they changed the name to multi-access edge computing. The spec relates to the mobile edge infrastructure. MEC aims to place compute and storage resources, like NFV, in the 4G radio access network to improve the delivery of content and applications to end users. And this summer, the Automotive Edge Computing Consortium was established. This is a new sign: we have to be aware that the telecom industry and other industries have to work together on edge computing technology innovation. This consortium will focus on increasing network capacity to accommodate automotive big data in a reasonable fashion between the vehicle and the edge computing infrastructure, and on more efficient network design. In most cases today, IoT edge devices, automotive edge devices, mobile user equipment, and many other devices have to send their data to a cloud or big data center and get responses from it. IT workloads are running in the cloud data center, which is far from the user side. In most cases, we are facing two issues: scalability, to handle hundreds of thousands of devices, and executing actions in real time. A telecom edge computing architecture can solve the challenges of latency, scalability, and security. We put the edge computing system in the telecom central office, between the customer device side and the data center side. 
The edge computing system can reduce latency and bandwidth and provide data caching, real-time complex event processing, and preprocessing of the data events coming out of the devices. The edge computing system can drive the communication between the devices and the back-end systems. There are many use cases, such as industrial IoT, V2X, video analytics, CDN, and AR/VR. It's also possible to integrate some enterprise cloud services at the telecom edge node. An edge computing system is structured like other data center technology, and it has some of the same requirements. It needs a runtime environment, containerization, compute power and virtualized storage, and lightweight messaging technology. It needs business rule processing and data analytics to process the huge volume of data being generated by the devices themselves. In addition to the network service entities in OpenStack NFV, telecom carriers will be able to provide common service entities and allow users to run real-time application services on OpenShift Container Platform at the telecom central office nearest to the customer side. So the primary target location to run edge computing is the telecom central office. The telecom central office typically uses multiple access technologies to serve diverse customers in the residential, enterprise, and mobile market segments. For the residential edge service case, customer optical access lines are terminated at the OLT, the optical line terminal, and IP traffic is carried to the BNG, the broadband network gateway, which manages each customer's IP session. The edge computing server can be placed behind the BNG. For the mobile edge service case, the edge computing server running the mobile edge platform can be placed behind the BBU, the baseband unit. As you might know, there are projects called CORD and VirtualCO. 
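The preprocessing role described above — keeping raw device data at the edge and forwarding only what the back end needs — can be sketched roughly as follows. This is a minimal illustration, not any product's API: the event format, device names, and thresholds are all assumptions made up for the example.

```python
# Sketch of edge-side data preprocessing: instead of forwarding every raw
# device reading to the central cloud, the edge node keeps a rolling
# window per device and forwards only window summaries plus immediate
# alerts on threshold violations. All field names and thresholds here
# are illustrative assumptions.
from collections import deque

WINDOW = 10           # readings kept per device before summarizing
ALERT_TEMP_C = 80.0   # hypothetical real-time alert threshold

class EdgePreprocessor:
    def __init__(self):
        self.windows = {}     # device_id -> deque of recent readings
        self.outbound = []    # messages actually sent toward the cloud

    def ingest(self, device_id, temp_c):
        win = self.windows.setdefault(device_id, deque(maxlen=WINDOW))
        win.append(temp_c)
        if temp_c >= ALERT_TEMP_C:
            # real-time event: forward immediately
            self.outbound.append(("alert", device_id, temp_c))
        elif len(win) == WINDOW:
            # otherwise forward only a compact window summary, then reset
            avg = sum(win) / len(win)
            self.outbound.append(("summary", device_id, round(avg, 2)))
            win.clear()

edge = EdgePreprocessor()
for t in [70, 71, 72, 73, 74, 75, 76, 77, 78, 79]:
    edge.ingest("robot-1", t)
edge.ingest("robot-1", 85)   # exceeds threshold -> immediate alert
print(edge.outbound)
```

Ten normal readings collapse into one summary message, while the out-of-range reading is forwarded at once — which is the latency and bandwidth reduction the edge system is there for.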
To virtualize the network functions and disaggregate network access technology in the central office, both CORD and VirtualCO can help to save colocation space in the central office and can add additional value-added services, such as edge computing services, into the central office. The colocation space for the edge computing service in a telecom central office is not unlimited like a big data center; it depends on the location. These are sample pictures of telecom buildings posted on the website denwakyoku.jp. The primary target for developing the edge computing service is a typical central office. It is a small-data-center size in most cases. You can find more than 2,000 central offices on that website. Many central offices are operated by fixed network operators who have provided fixed telephone service for a long time. Some of them still have many central offices nearest to the customer side. As for mobile network operators, their primary target for edge computing service will be in urban areas, especially for the 5G mobile broadband service. This picture of the tower is a Docomo base station in Shinjuku, Tokyo. It's a nice building. There are many radio network controllers and baseband units in that building, and edge computing servers can be installed indoors there. But in rural areas, there are many outdoor deployments in the current 4G LTE infrastructure. Outdoor deployment is out of scope at this stage, because there is no space to install an edge computing server; at least an indoor deployment is needed for server installation. This slide summarizes example deployment options for the edge computing system in the telco central office: a small PoD system, a mini PoD system, and a micro PoD system. The small PoD system needs 5 to 10 racks of space to install, the mini PoD system needs 1 or 2 racks, and the micro PoD system needs half a rack. 
In the case of a micro PoD system, a hyper-converged node that integrates the compute node and the storage node will be needed. In addition to the colocation space, distance from the customer side is also a key point for where edge computing can be deployed. This slide illustrates the hierarchical locations of central offices based on distance from the customer side. Historically, fixed network operators' central offices have been located based on the constraint of the physical copper loop length, which generally means that the number of locations is determined primarily by the topology constraint — the maximum distance, typically 2 or 4 kilometers depending on the operator. And the size of the central office is determined primarily by the type of area served and the associated population density: urban, suburban, or rural. Basically, it's a tiered structure from access to core. This way, the edge CO aggregates the customer access lines, and the aggregation CO aggregates the distributed edge COs. The distance from the customer side is about less than 20 kilometers. Latency in fiber optic cable is about 5 microseconds per kilometer one way, so it will be about 0.2 milliseconds of round-trip latency over a 20-kilometer optical fiber network. For mobile networks, it depends on the deployment architecture of the RAN, the radio access network. User sessions are terminated by the EPC mobile packet core in the main CO, which is far from the RRH, the remote radio head. Centralized RAN for 4G LTE Advanced Pro, or 5G in the future, is aggregated at the BBU hoteling site. The distance between the RRH and the BBU, the baseband unit, over the CPRI fronthaul is about 20 kilometers in the case of 4G LTE. And the BBU hoteling sites are aggregated by the main CO. So compared to the fixed network operators, mobile network operators don't have many central offices now, because of the cost efficiency of wireless access; instead, mobile network operators have many outdoor base stations. 
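The latency arithmetic above is easy to sanity-check. The ~5 µs/km figure is an approximation for light in single-mode fiber (about c divided by the refractive index, ~1.47); it ignores switching and queuing delay, so treat this as a back-of-the-envelope sketch, not a network design tool.

```python
# Back-of-the-envelope check of the fiber latency figures mentioned above.
# Propagation in single-mode fiber is roughly 5 microseconds per kilometer
# one way (c / ~1.47); switching and queuing delay are ignored here.
FIBER_US_PER_KM = 5.0  # approximate one-way propagation delay

def round_trip_ms(distance_km):
    """Round-trip propagation delay over a fiber path, in milliseconds."""
    return 2 * distance_km * FIBER_US_PER_KM / 1000.0

# A central office within 20 km of the customer:
print(round_trip_ms(20))   # about 0.2 ms round trip
```

That 0.2 ms round trip is why a central office within ~20 km of the customer is close enough for the 10 ms class of edge applications discussed earlier.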
This slide shows the candidates for edge computing server colocation sites. Mobile network operators can install edge computing servers at the BBU hoteling sites, and an MVNO site is also available to run edge computing services, but the BBU hoteling site is nearest to the customers and devices via 4G wireless. Fixed network operators can also install the edge computing server at the edge aggregation CO site. So each candidate central office needs space for the edge computing service. As I mentioned before, there is a VirtualCO project use case in OpenDaylight. The VirtualCO architecture can virtualize edge functions on top of OpenStack NFV and manage edge traffic with the OpenDaylight SDN controller. We demonstrated a VirtualCO residential service at the OPNFV Summit this year. In the demo, we integrated a virtual BNG and a virtual firewall on top of the Red Hat OpenStack NFV platform and managed edge traffic with the OpenDaylight SDN controller. We can use this VirtualCO software stack to integrate the edge computing function. Here is the idea for integrating the edge computing function into the VirtualCO software stack, which is based on the ETSI NFV reference architecture. We can run OpenShift Container Platform on top of OpenStack NFV and containerize edge computing services in pods on the OpenShift nodes. In this residential access service, the user's IP session is managed by the vBNG, which transfers specific user traffic to the OpenShift node. The ingress controller of OpenShift needs enhancement to fit each telco edge environment. The telco edge environment is not the same as the internet data center environment, which places the load balancer in front of OpenShift. But the DevOps environment can keep the same process to develop real-time containerized applications in pods and provide real-time services to specific users or devices through OpenShift Container Platform at the telco edge. 
There are many open source projects you can use to develop container applications and run them in the edge PaaS environment on top of the OpenStack NFV edge platform for residential access. For example, the Eclipse IoT project has several edge computing projects, such as Kura, Kapua, and Hono. Eclipse Kapua provides the services required to manage IoT gateways and smart edge devices through a Kura integration framework. Eclipse Hono provides remote service interfaces for connecting large numbers of IoT devices to a back end and interacting with them in a uniform way regardless of the device communication protocol. Fixed network operators have many opportunities to provide edge computing services running on OpenShift Container Platform on top of OpenStack NFV now. As for the edge computing architecture in mobile infrastructure, the ETSI MEC ISG is working on it. The spec is still not finalized, but there are drafts you might find online. This slide shows the ETSI MEC draft reference architecture for deployment of MEC in an NFV environment. The major components in the MEC architecture are the mobile edge platform, the mobile edge orchestrator, the mobile edge platform manager, the mobile edge applications, and the virtualized infrastructure manager, which is the OpenStack controller. The mobile edge system consists of a set of MEC servers and the associated management entities. The MEC server is a logical entity that contains a mobile edge platform and the NFV infrastructure on which the mobile edge applications run. The mobile edge platform contains a set of baseline functionality that enables mobile edge applications to run on a particular server, as well as to discover and provide mobile edge services through the service registry. The mobile edge platform is also responsible for enforcing the traffic rules to transfer data packets to the mobile edge applications, as well as maintaining the necessary DNS handling to discover the mobile edge applications. 
Mobile edge applications run on the MEC server as virtual machines and are designed to provide mobile edge services. Mobile edge computing is designed to provide a multi-tenant hosting environment for edge applications. The hosting environment consists of hardware resources and virtualization infrastructure — which is OpenStack NFV — and the associated management services for MEC applications. There is no spec for an edge PaaS in the MEC architecture yet, but we can run OpenShift Container Platform on OpenStack NFV to containerize some of the MEC applications. The mobile edge platform sets policy and configuration rules for forwarding user-plane traffic to MEC applications through the traffic offload function. It also can provide the radio network information service and other real-time context information to authorized MEC applications. The OpenShift ingress control needs to be enhanced to interwork with the mobile edge platform. In terms of the logical network architecture, the MEC server can be a part of the eNodeB in the RAN, the radio access network, or it can run on an external edge computing server, as in this slide. With the mobile edge platform on OpenStack NFV, the MEC server can be deployed between the RAN and the EPC mobile core on the S1 interface. The user plane over the S1 interface is GTP-U based. Usually, without the MEC server, the user's IP application traffic is carried through the GTP-U tunnel, and the user traffic can only be reached at the Gi-LAN or MVNO side. So OpenShift Container Platform can receive the user application traffic at the Gi-LAN or MVNO; this is the currently feasible solution. But the Gi-LAN and MVNO side in 4G mobile is not for real-time service, because they are far from the radio access network in most cases. So what we are trying, as in this diagram, is that for edge computing services in the 4G radio access network, the MEC server should be deployed inline. 
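Because the S1 user plane is GTP-U, an inline traffic offload function has to look inside the GTP-U tunnel to find the subscriber's IP packet. As a concrete illustration, here is a minimal parse of the mandatory 8-byte GTPv1-U header; it deliberately ignores the optional sequence number, N-PDU number, and extension headers (which lengthen the header when the E/S/PN flag bits are set), so it is a sketch rather than a complete implementation.

```python
import struct

# Minimal sketch of parsing the mandatory 8-byte GTPv1-U header that
# carries user-plane traffic over the S1 interface. Optional fields
# (sequence number, N-PDU number, extension headers, present when the
# E/S/PN flag bits are set) are ignored here for brevity.
def parse_gtpu(packet: bytes):
    flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
    return {
        "version": flags >> 5,              # 1 for GTPv1
        "protocol_type": (flags >> 4) & 1,  # 1 = GTP (0 = GTP')
        "msg_type": msg_type,               # 0xFF = G-PDU, i.e. user data
        "length": length,                   # payload length after this header
        "teid": teid,                       # tunnel endpoint identifier
        "payload": packet[8:8 + length],    # the subscriber's IP packet
    }

# Example: a G-PDU carrying 4 bytes of user data on TEID 0x1234
pkt = struct.pack("!BBHI", 0x30, 0xFF, 4, 0x1234) + b"\xde\xad\xbe\xef"
hdr = parse_gtpu(pkt)
print(hdr["version"], hdr["msg_type"], hex(hdr["teid"]), hdr["payload"])
```

The TEID is what identifies the subscriber's tunnel, which is why the offload function can steer a specific user's traffic toward the edge application while leaving everything else untouched toward the EPC.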
And the mobile edge platform should be transparent to GTP and run the traffic offload function without impacting the mobile core network. Once the mobile edge service has been applied, the edge-termination traffic pattern is the edge PaaS model in which OpenShift Container Platform can run in the MEC server. As long as OpenShift can receive the specific user's application traffic controlled by the mobile edge platform in the MEC server, OpenShift can handle the user's applications within each isolated user network and can provide real-time responses at the MEC server to a specific user or device. Usually, a telecom carrier cannot modify users' traffic in transit through the telecom domain due to telecom regulation issues. So the edge-termination traffic pattern is the right model for the telco edge PaaS, because the user traffic already terminates in the user's enterprise domain in this case. So we can run the virtual edge platform in front of the OpenShift nodes on top of the OpenStack NFV platform in each telecom access environment. This slide illustrates it based on the ETSI NFV reference architecture, and we can put the VNF in front of OpenShift Container Platform. The challenge is the OpenShift ingress controller enhancement. We have several options for the ingress controller, such as HAProxy, NGINX, F5, and so on, but none of them cover the interworking between the edge platform and OpenShift services yet, because current ingress solutions are mainly for the public cloud, while telecom edge infrastructures are basically private access points. For edge computing services in the 4G mobile access network, we need interworking with the mobile edge platform to handle the mobile session and the specific data from a device for a specific edge computing service. For edge computing services in the residential access network, we need interworking with the vBNG to handle the specific IP session from a device for a specific edge computing service. Another challenge in developing the telco edge PaaS is multi-site deployment. 
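The interworking gap described here — the ingress having to map a specific subscriber session to a specific containerized edge service — amounts to a per-session steering table that the edge platform (the MEC platform or the vBNG) programs and the ingress consults. The following is a purely hypothetical sketch of that control/data-path split; the class, method names, and session identifiers are invented for illustration and do not correspond to any existing ingress controller API.

```python
# Hypothetical sketch of the missing ingress interworking: the edge
# platform (vBNG or MEC platform) installs per-subscriber steering rules,
# and the ingress either steers a session's traffic to a containerized
# edge service or lets it bypass toward the core untouched.
# All names and fields are illustrative assumptions.
class EdgeIngress:
    def __init__(self):
        self.steering = {}   # subscriber session id -> target edge service

    def install_rule(self, session_id, service):
        """Control path: called by the edge platform when a subscriber
        session is authorized for a specific edge computing service."""
        self.steering[session_id] = service

    def route(self, session_id, payload):
        """Data path: steer matching sessions, bypass everything else."""
        service = self.steering.get(session_id)
        if service is None:
            return ("bypass", payload)   # untouched, toward the mobile core
        return (service, payload)        # steered to the edge service

ingress = EdgeIngress()
ingress.install_rule("sess-42", "robot-controller")
print(ingress.route("sess-42", b"cmd"))    # steered to the edge service
print(ingress.route("sess-99", b"data"))   # bypassed toward the core
```

The design point is that only explicitly authorized sessions ever touch the edge service — all other traffic transits unmodified, which matches the regulatory constraint mentioned above.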
As I mentioned before, telecom central offices are hierarchically located in each area. You can imagine it from telephone number assignment: local number, area number, and country number. It's hierarchical. A telecom carrier needs to deploy distributed OpenStack NFV in each area, like the two-tier multi-site model here. On top of the distributed OpenStack NFV infrastructure, they can build a cloud OpenShift platform like a single data center across multiple central offices in each regional area. OpenStack NFV distributed compute deployment will be ready in the near future, since Ceph storage for multi-site deployment is ready. But some telecom carriers don't need to wait for the distributed OpenStack NFV feature if they can design the optical fiber transport network between two CO sites with minimum latency; this depends on the optical fiber network resources. Basically, if you ask an optical fiber transport engineer to keep five milliseconds of latency between two CO sites, they will design the fiber route if they have the optical fiber network resources. Some telecom carriers have many optical fiber transport network resources in their country, so they can design a computing infrastructure like a single data center. The key point is to build the infrastructure like a single data center: optical fiber transport engineering is important to build a telecom edge cloud computing infrastructure. On top of the optical fiber network infrastructure, a telecom carrier can build multi-site Ceph storage and OpenStack NFV. An inter-data-center SDN solution is also needed; our SDN partners like Cisco, Juniper, and Nuage have inter-data-center SDN solutions. On top of OpenStack NFV, OpenShift's persistent storage can use OpenStack Cinder or Ceph RBD storage. As for isolating the network between each pod on each node per software project, we have many SDN partner solutions with CNI plugins. Telecom carriers can work with the SDN partners to deploy a DevOps-aware network across multiple pods. 
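The five-millisecond design rule above can be turned into a simple check: two CO sites can be operated like a single data center only if the fiber route between them stays within the latency budget. The route lengths below are hypothetical, the ~5 µs/km figure is the usual fiber approximation, and whether the budget is one-way or round-trip is a design assumption (one-way is used here).

```python
# Sketch of the 5 ms inter-CO design rule: check whether a fiber route
# between two central office sites fits the latency budget. The 5 us/km
# propagation figure and the example route lengths are assumptions; the
# budget is treated as one-way delay here.
FIBER_US_PER_KM = 5.0
BUDGET_MS = 5.0

def within_budget(fiber_km):
    """One-way propagation delay between two CO sites vs. the budget."""
    return fiber_km * FIBER_US_PER_KM / 1000.0 <= BUDGET_MS

# Hypothetical fiber route lengths between CO pairs, in km:
routes = {("CO-A", "CO-B"): 120, ("CO-A", "CO-C"): 1200}
for pair, km in routes.items():
    print(pair, "ok" if within_budget(km) else "too far")
```

A 120 km route (0.6 ms) fits comfortably; a 1,200 km route (6 ms) does not — which is why the fiber transport engineering, not the software, decides which COs can be pooled into one logical data center.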
And that OpenShift SDN needs to interact with the inter-DC SDN solution. Here is a PoC we are working on with NTT Network Technology Laboratories. They implemented Red Hat OpenStack NFV multi-site and ran OpenShift — actually, it's the Cloud Suite product, which includes OpenStack and OpenShift. And they developed an IoT robotics controller prototype with OpenShift and ran the IoT robotics container application in a pod to control a robot remotely. Also, they are adding common service entities such as an IoT gateway, DNS, and an authentication service into OpenShift Container Platform. They are now working on the ingress design. IoT device traffic is aggregated by the virtual CPE in front of OpenShift; the virtual CPE covers many access functions, including the BNG. As for the SDN controller, they are now just using Ansible. It's not full automation with an SDN controller, because they want to check each network behavior at this stage so that they can design it. Between the two sites, they built an EVPN and have managed the network configuration with Ansible so far. I believe that they will start to work with an SDN vendor once the new architecture for the telco PaaS model is finalized. For further detail, we have a session at the OpenShift Commons Gathering; you can discuss it with NTT Network Technology Laboratories at Austin. Please join the OpenShift Commons Gathering. Thank you, Hide. That's a great segue. And also, there will be many more PoC opportunities to run IoT applications in the telco PaaS infrastructure. In the Eclipse IoT project, I found that there are several PoCs that should be done in the edge computing infrastructure instead of the cloud infrastructure for real-time services. I'm now starting to work with our IoT team and trying to invite some parties to start cross-industry PoCs on top of the edge computing infrastructure that some telecom carriers are working on with us. Through those cross-industry PoCs, the telecom carriers will be able to get to know the real requirements from the IoT industry. 
And the last thing I'd like to mention is automotive industry activity. This is a new sign that the automotive industry wants a telco edge computing infrastructure. Toyota presented the 5G requirements for V2X at Empress Japan last year. In V2X currently, the vehicle's on-board IoT gateway device sends the car's device data to the Toyota big data center through the 4G mobile network. It is estimated that the data volume between vehicles and data centers will reach 10 exabytes per month around 2025. This will trigger the need for new network and computing infrastructure architectures to support distributed resources and topology-aware edge computing and storage capacity. They are exploring the design for geo-distributed deployment in the 5G mobile infrastructure. OK, this is the Red Hat NFV software stack. In summary, telecom carriers can provide network resources and storage resources, in addition to the edge computing to run OpenShift as an edge PaaS. The BNG as a virtual network function can also be placed in front of OpenShift for residential access, and the mobile edge platform as a virtual network function can also be placed in front of OpenShift for mobile access. Also, the service provider can provide common service entities to the many third parties who deploy their application entities on OpenShift Container Platform. So the key takeaways from this session are two things. We had better keep a flexible architecture so that we can implement the edge PaaS in each telecom access infrastructure through ingress enhancements to interact with each edge platform. By using OpenStack NFV and OpenShift Container Platform, we can place the mobile edge platform in front of OpenShift for the mobile edge network infrastructure, and for the residential access network, we can place the BNG function in front of OpenShift on top of the OpenStack NFV platform. Through the collaboration activities with telecom carriers' R&D, I feel that some telecom carriers are transforming to a new normal. 
They are going to be ICT platform providers for new service providers who provide new edge services to users, machines, cars, and everything. We have a plan for a joint session with NTT Network Technology Laboratories at the OpenShift Commons Gathering in Austin. We can discuss the details of how telcos should provide the telco edge PaaS on top of the edge computing infrastructure at that event. I think that's all for this session. Thank you for listening. Awesome. Thank you very much for this introduction, because I think this has set the stage really nicely for folks who are at the telco and automotive interface, but also for people who are going to be in the audience for the OpenShift Commons Gathering in Austin this coming December 5th. We're really looking forward to having an NTT presentation on their great use case. I think one of the things that I like about the gatherings is that they open up lots of conversations, especially the stuff you're talking about around the automotive industry, because I know a large number of the big automotive companies — BMW and Volvo and other folks — are already using OpenShift extensively. So it'll be interesting to get their feedback on how this works and might work, and how it plays into what they're thinking is coming down the path in the next ten, five, or maybe even just two years. You've got all the IoT and automotive and huge networks that we're going to have to be working with and supporting. So thank you very much for this talk. If people want to get a hold of you, Hide, perhaps you could put your very first slide back on; I think it had your contact information on it. Oh, yeah. I think it had the contact. There we go. There we go. Live interactive demo — that's the demo part for here. So if you want to get a hold of Hide at Red Hat, maybe connect with folks at NTT who have been listening to this and want to find out more before December 5th. 
So Hide, a question about the demo has just come in at the very end. Yeah, regarding the demo, actually, NTT has a demo to control the mini robot. It's a nice demo, but it takes time to prepare. We can discuss it with NTT prior to the Commons Gathering. That would be cool. All right, well, I'm going to say thank you and sign off now. And thank you.