Hello, everyone. Good afternoon. Today my topic is empowering cloud-native networking with the Arm ecosystem. My partner, Hanyu Ding from China Mobile, will also give a talk in this session. Now I will start. Thank you.

First, the agenda for today. The first item is an introduction to today's topic. The second is a reference cloud-native networking stack on Arm. The third is high-performance container networking interfaces for cloud-native networking on Arm. The fourth is the multi-interface requirement with SR-IOV support on Arm. The fifth is performance test results for various container networking interfaces on Arm. The sixth is cloud-native networking use cases and deployments on Arm. And the last is our future work.

First, what is cloud-native networking? Cloud-native computing means using a containerized open-source software stack, where each part of the application is packaged into its own container and dynamically orchestrated, so that each part is actively scheduled and managed to optimize resource utilization. Cloud-native networking is the approach that provides the networking environment for building and running applications that exploit the advantages of the cloud computing delivery model. So high-performance, flexible, and feature-rich container networking technology is key to the success of cloud-native computing.

Various CNIs have emerged today to support cloud-native networking. CNI, the Container Network Interface, is a CNCF project. It consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins, and it follows the Kubernetes networking model. (I will show a minimal plugin configuration sketch at the end of this introduction.)

Next, cloud-native computing with Arm. Arm originally joined the Linux Foundation 10 years ago, joined CNCF as a Gold member in February 2019, and upgraded to Platinum membership in November 2019. Arm is already a key player in telecom-centric organizations such as the Telecom Infra Project, DPDK, FD.io, and the Linux Foundation's Akraino project. Arm provides Neoverse, the foundation for the 5G cloud-to-edge infrastructure, and is actively engaged with the cloud-native software ecosystem to deliver optimized stacks, so DevOps teams can port and run code and applications on Arm-based solutions.

Arm's Neoverse platform supports cloud-to-edge computing. Currently, the Neoverse N1 platform provides very high performance and scalability for cloud-to-edge computing, with a 60% performance uplift over the original Cortex-A72 cores. For 2021, the next-generation N2 platform, code-named Perseus, moves the semiconductor technology from 7nm to 5nm. The Neoverse platforms deliver cloud-to-edge efficiency across segments: for cloud data centers, 32 to 192 cores, very high performance but with high core counts and high power consumption, for cloud CPUs; for edge and 5G use cases, a middle core count of 12 to 36 cores with a medium TDP, for 5G, edge, switches, and SmartNIC CPUs; and for low-end edge devices, very low core counts and low power, for gateway and router CPUs. This N-series scalability serves the performance requirements of each use case. And actually, cloud workloads deserve dedicated cores rather than hyperthreaded ones in the data center; thread performance will win in the cloud, with more and more high-performance cores and high-performance threads.
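To make the CNI plugin configuration concrete, here is a minimal sketch; this is my illustration, not a slide from the talk. It uses the standard bridge and host-local reference plugins, and the network name and subnet are illustrative assumptions. A CNI config is the JSON document the container runtime hands to the plugin binary named in "type":

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The "ipam" section delegates IP address allocation to a second plugin; the SR-IOV example later in this talk uses the same mechanism.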
Arm is also working to enable a frictionless cloud-native development experience across the various open-source software projects and workloads mentioned here.

The next part is the reference cloud-native networking stack on Arm. In this reference stack, we use hardware acceleration from SmartNICs or from integrated accelerators. At the operating-system networking layer, we provide real-time Linux distributions with SR-IOV and DPDK support. At the orchestration layer, we provide Kubernetes orchestration for containers, together with container runtimes such as Docker. For the networking data-plane software, we provide VPP, OVS, and Linux system networking with eBPF support. For container networking, Flannel, Calico, and Cilium with eBPF, Contiv-VPP with DPDK support, and OVN-Kubernetes with OVS support are all provided. At a higher level, we provide the Kubernetes networking abstraction, which covers load balancing, policy, high availability, DNS, lifecycle, external IPs, and security. On top of that sit services such as the CNCF service mesh, package management, CI/CD, and others. With this stack, we want to support 5G network functions, MEC, and other web-based microservices.

The core of this cloud-native networking stack is its supporting network technologies. For layer 2, we use the Linux Ethernet drivers and bridge, DPDK virtual switching, and the like. For layer 3, there are iptables, and there are also overlay tunnels: IPIP, VXLAN, GRE, and other IP-based tunnels. Among the layer 2/3 technologies, the most important is the kernel JIT, eBPF (extended Berkeley Packet Filter). It provides programs, maps, and helper functions to support high-performance networking and load balancing in the kernel. At layer 7, we apply the Envoy-based HTTP proxy. This is the reference cloud-native networking stack provided on Arm.
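As a hedged illustration of the "policy" piece of that Kubernetes networking abstraction (my own minimal sketch, not part of the reference stack itself), here is a NetworkPolicy of the kind that CNIs such as Calico or Cilium enforce; the namespace, labels, and port are illustrative assumptions:

```yaml
# Minimal NetworkPolicy sketch: only pods labeled role=frontend may
# reach the backend pods on TCP 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
  namespace: demo                # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```

The object itself is declarative; it is the CNI plugin that enforces it on each node, with iptables rules in Calico's case and eBPF programs in Cilium's.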
Now I would like to talk about the high-performance container networking interfaces for cloud-native networking on Arm. High-performance container networking interfaces are indeed available on Arm, with broad support. Here I will give a brief introduction based on our IEC reference stack, which comes from the Akraino IEC blueprint and is integrated into the IEC repo. I have given the links here; if you are interested, you can check out these CNIs.

The first is Calico. Calico is based on pure IP networking, with high-level network policy management via iptables. It provides good scalability, supports both direct and overlay network connectivity, and is easy to deploy.

The second is Cilium, a very popular container networking choice today, which uses Linux eBPF for network policy, load balancing, and security. It is believed to deliver excellent performance for networking between hosts, with good scalability too.

Contiv-VPP uses FD.io VPP to provide network connectivity between pods, with native DPDK interface support for the physical NIC.

OVN-Kubernetes is an OVN-controller-based Kubernetes networking solution with rather good performance through its OVN integration; it uses OVN logical switches and routers to connect pods for outside access.

For SR-IOV physical interface support, we provide Multus with the SR-IOV CNI, which gives pods direct physical interfaces with PF/VF support. It is high-performance, using either the direct Linux kernel Ethernet driver or a DPDK PMD driver, and it usually co-works with other CNIs, such as Flannel or Calico, glued together by Multus and similar meta CNIs.

The last one is the widely used Flannel, which eases the deployment of simple Kubernetes networking. It uses a Linux network bridge for pod communication and overlay-based communication for inter-host access.

If you are interested in any of these CNIs on Arm, you can find them in the IEC repo in Akraino. This gives a brief picture of the status of CNIs on the Arm platform and of our contributions, which are only a small part of the whole.

Now I would like to talk about the multi-interface requirement with SR-IOV SmartNIC support on Arm. Here is multiple-interface support by the SR-IOV CNI. Usually we provide the legacy Flannel or Calico plugin for all the pods as the default CNI. Optionally, when a user wants a high-performance interface in their pod, we provide the SR-IOV CNI plugin to supply a second interface. That second interface is a physical interface provided by the physical NIC, which may be a VF or a PF. Here we provide this SR-IOV support to the pod with a SmartNIC from Broadcom.

On the right is the resource configuration for the VF NIC. For example, for the SR-IOV interfaces from this Broadcom SmartNIC, we specify the driver, the bnxt_en Ethernet driver, together with a description of the driver and related settings, so the system discovers all the related virtual function interfaces. If we want DPDK device support instead, we use the DPDK-capable driver binding, vfio-pci here, with a physical NIC from Intel.

Here is the network attachment definition for the sriov-net resource, which is referenced in a pod description and provides IPAM support for IP address allocation for the pod. For example, to add an SR-IOV interface to a pod besides the original Flannel or Calico interface, we add an annotation for sriov-net1 and a resource request for arm.com/ps225 of one, so one SR-IOV interface is provided to that pod. A hedged sketch of these two documents follows at the end of this part.

Here is the networking model for containers in a single host with the SR-IOV CNI: we provide the SR-IOV CNI with the SmartNIC PS225, with a single PF but two different VFs, and the pods are connected through those different VFs within a single host.
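Here is the promised sketch of the two documents; this is my reconstruction, since the exact values on the slide are not fully legible in the recording. Treat the arm.com/ps225 resource name, the subnet, and the image as illustrative assumptions; the annotation keys and object kinds are the standard Multus and SR-IOV device plugin conventions:

```yaml
# NetworkAttachmentDefinition: ties the secondary network "sriov-net1"
# to the SR-IOV resource pool advertised by the device plugin.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: arm.com/ps225   # assumed resource name
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "sriov-net1",
      "type": "sriov",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24",
        "gateway": "10.56.217.1"
      }
    }'
---
# Pod requesting one VF: Multus attaches the SR-IOV interface as a
# second NIC next to the default Flannel/Calico interface.
apiVersion: v1
kind: Pod
metadata:
  name: sriov-testpod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net1
spec:
  containers:
  - name: app
    image: alpine            # illustrative workload
    command: ["sleep", "infinity"]
    resources:
      requests:
        arm.com/ps225: "1"
      limits:
        arm.com/ps225: "1"
```

The scheduler only places the pod on a node with a free VF in that resource pool, which is what makes the second, high-performance interface reliable.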
For inter-host communication, we can also use the SR-IOV CNI plugin, which provides VF support to the outside world through the SR-IOV interface toward the second host. Here you can see this pod can communicate with another host through the SR-IOV interface, via enp9s0 to the SmartNIC VF0 and out to the outside host.

We have done some performance tests for the various container networking interfaces on the Arm platform. The first set is performance tests for different CNIs with different back-end support. For example, we tested Calico with an IPIP tunnel, Calico with direct routing, Flannel with an IPIP tunnel, Flannel with a VXLAN tunnel, and Cilium with a VXLAN tunnel, and we got results as follows. Our initial observations for TCP performance over the different CNIs: the performance gap between CNIs is not so pronounced when an overlay tunnel is used; Calico and Flannel show slightly better performance than Cilium for most MTU sizes here; with an IPIP or VXLAN overlay tunnel enabled, throughput is better at larger MTU sizes; and with direct routing, throughput is not significantly affected by MTU size. As time is limited, I will not say too much about the results; you can check the details in these slides later.

A very interesting result is the HTTP performance benchmarking we did for CNIs with various back-ends. Here is the host-to-service HTTP performance, through the service abstraction provided by Kubernetes, for the different CNIs: over a 10-gigabit connection, throughput approaches line rate as the number of threads used in the benchmark increases.

Now something about service mesh on Arm. A service mesh is the communication layer in a microservices setup: all requests to and from each service go through the mesh. As an infrastructure layer in a microservices setup, the service mesh makes communication between services reliable and secure. For the control plane we use Istio, which is composed of Mixer, for policy enforcement and telemetry checks; Pilot, for service communication policy configuration; Galley, for control-plane configuration; and Citadel, for security and credential management. For the data plane we use Envoy, a high-performance HTTP proxy developed in C++. Envoy provides many services, such as service discovery, rate limiting, L7 filters, advanced load balancing, and so on.

We also provide an Istio use case, the Istio Bookinfo example, on Arm. We deployed this example on the Arm platform, with the productpage, ratings, reviews, and details services in pods. Envoy works here as a sidecar to each pod, which supports the microservice architecture for the data flow of the different accesses.
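As a minimal sketch of how those Envoy sidecars end up next to each pod (my illustration, not a slide from the talk): with Istio's automatic injection, labeling the namespace is enough, and the mutating webhook then adds the Envoy container to every pod scheduled there. The namespace name is an assumption:

```yaml
# Label a namespace so Istio's mutating webhook injects the Envoy
# sidecar into every pod created in it (e.g., the Bookinfo pods).
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo             # illustrative namespace
  labels:
    istio-injection: enabled
```

After the Bookinfo manifests are applied into that namespace, each pod runs two containers: the application and its Envoy sidecar, which handles all of the pod's inbound and outbound traffic.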
Now, Hanyu Ding from China Mobile will talk about the use cases China Mobile has built on the Arm platform.

As we all know, cloud-native technology will make 5G services in China Mobile more flexible and convenient. Virtual-machine-based orchestration is being transformed into container- and component-based orchestration, and the gradual accumulation of operator capabilities will further enhance business creativity and flexibility.

In recent years, China Mobile has actively invested in the open-source field of edge computing. China Mobile has been elected to the Akraino community Technical Steering Committee and has actively led many edge-cloud blueprint projects in Akraino. So I will introduce some use cases from China Mobile. China Mobile and its partners initiated, and serve as PTL of, two Akraino projects, which have successfully released the community's R3 version.

The first is about cloud-native applications on Arm servers at the edge. This project mainly provides end-to-end Android application solutions around Android cloud-gaming scenarios. Cloud gaming is one of the key vertical industry applications of 5G edge computing in China Mobile. It mainly deploys cloud-game rendering and encoding at the edge node, along with decoding, storage, and some other functions. The project will provide an Android cloud-native running environment under the Arm architecture for mobile terminal applications at the edge. It will also reduce the difficulty of deploying and developing applications on the edge cloud and reduce the deployment cost of the Android edge cloud.

The second project is about the SmartNIC. The R3 version of the Integrated Edge Cloud (IEC) Type 5 blueprint released, for the first time, the OVS offload reference implementation on the BlueField SmartNIC based on an SoC architecture, and it was merged into the Akraino community R3 release. The first release of this project is based on the Arm SoC architecture, and OVS-DPDK is offloaded to the smart network card, which can enhance the forwarding performance of the edge-network VPC, reduce the packet loss rate, and improve the management of network card resources to save more computing resources. In the future, the performance of 5G UPF network elements deployed in China Mobile's edge-cloud data centers can also be enhanced by this network card offload capability.

So China Mobile is willing to further promote the maturity and business innovation of cloud-native computing through active exploration and practice in the field of cloud-native and edge-computing open source, and we are willing to create a win-win cloud-native computing ecosystem. Thank you.

OK, back to our future work. We will continue CNI support and enhancement on the Arm platform, for example CNI performance and feature comparison, and performance optimization of overlay tunnel communication on the Arm platform. We also plan further use cases and deployments of cloud-native networking on Arm, for example AR/VR and 5G MEC; service mesh integration with high-performance CNIs such as Cilium with Envoy proxy; further DPDK-accelerated container networking usage models and performance evaluation; and further CI for high-performance CNIs.

Here are some references used in today's topic. Thank you all for attending this session. Bye-bye.