Hello everyone. My name is Elsa Matthew. I'm a software engineer at Intel, working at the intersection of cloud-native technologies and packet processing software libraries. I've been at Intel for about six years now. Previously, I also worked on Ethernet networking driver software design and development for various OSes. I would also like to introduce the primary author of the talk, who could not make it in person but is happy to answer any follow-up questions offline. Manoj is a software architect in the data plane networking team in the Network and Edge group at Intel. He is currently working on building and optimizing cloud-native services using network data plane technologies and has over 20 years of industry experience in multiple technology domains.

Today's talk is about the work we did integrating the Cloud Native Data Plane (CNDP), which is a collection of user space libraries for accelerating packet processing for cloud applications, with OMEC BESS UPF. OMEC BESS UPF is part of the Open Networking Foundation's OMEC UPF project, which implements a 4G/5G user plane function based on the 3GPP control and user plane separation architecture. We will talk about the traffic flow through the UPF pipeline in a Docker environment, the Kubernetes integration with the AF_XDP Plugins for Kubernetes, and the Aether-in-a-Box deployment model. We will also describe platform technologies for GTP-U packet steering and how they work along with offload features in the NIC to redirect GTP-U packets directly to user space via AF_XDP.

The agenda for today's talk is divided into three sections. First will be CNDP, where we will start off with the motivation for CNDP, then go into a little bit of detail about what CNDP is and the various components that comprise it. We will also talk about the CNDP Kubernetes deployment model and the AF_XDP Plugins for Kubernetes. In the next section, we will talk about BESS and OMEC UPF and how CNDP integrates with each of those projects.
Then we will move into the Kubernetes integration, the Aether-in-a-Box integration, and then wrap up with a short demo.

Moving into the first section: CNDP, the Cloud Native Data Plane. We've had great success with DPDK and VPP, so why another development kit? The primary reason is that these data planes are not always designed to operate in a hybrid, private, or public cloud environment, because of hardware requirements around CPU, memory, and networking interfaces. To better align with cloud-native environments and principles, we need a data plane that doesn't have these restrictions. That is the origin of the CNDP concept. We took the key concepts from DPDK to achieve the performance requirements. CNDP was created to enable cloud-native developers to use AF_XDP and other interfaces in a simple way while providing better performance compared to standard Linux networking interfaces. CNDP does not replace DPDK; DPDK still provides the highest performance for packet processing. DPDK implements user space drivers, bypassing kernel drivers, and that is one of the reasons it achieves the highest packet processing performance. DPDK also implements a framework for initializing and setting up platform resources: scanning the PCI bus, allocating memory via huge pages, and so on. In contrast, CNDP does not have custom drivers. Instead, it expects the kernel drivers to implement AF_XDP, preferably in zero-copy mode. Since there are no PCI drivers, there is no PCI bus scanning, and we do not require physically contiguous, pinned memory. This simplifies deployment for cloud-native applications while gaining the performance benefits provided by AF_XDP. We target 10x the performance of Linux applications using sockets, with clean integration into various cloud toolchains as a requirement. DPDK and VPP remain the best solutions for undisputed performance, and CNDP fits the void between flexibility and performance.
So what is CNDP? It is a collection of user space libraries for accelerating packet processing for cloud applications. It aims to provide better performance than standard networking socket interfaces. The IO layer is primarily built on AF_XDP, an interface that delivers packets straight to user space, bypassing the kernel networking stack. CNDP also provides ways to expose metrics and telemetry, along with examples for deploying network services on Kubernetes. CNDP's audience includes both cloud network function (CNF) developers and consumers. Developers who create applications based on CNDP can let CNDP abstract away the low-level IO and focus on their application. Consumers of the applications built by CNF developers can take advantage of CNDP's deployment models for their applications using Kubernetes.

CNDP follows a set of cloud-native principles: functionality, usability, interoperability, portability, performance, observability, and security. And here's how. On functionality, CNDP provides a framework that gives cloud-native developers full control of their application. Usability: CNDP enables developers to create applications by providing APIs that abstract the complexities of the underlying system. Interoperability and portability: since CNDP is built on top of AF_XDP, it's possible to move the application across environments, from public to private to hybrid cloud, wherever AF_XDP is supported, and CNDP provides a common API to access network interfaces. With regard to performance, it takes advantage of platform technologies wherever they are available, and the application can always fall back to software when hardware acceleration is unavailable. Observability and security are other key aspects that CNDP focuses on: metrics are exposed via Prometheus agents, and security is a first-class citizen as well, which we'll come to on the next slide.
This diagram showcases the various components that comprise CNDP. You have the core libraries and the application libraries. You also have the cloud-native networking stack, an IPv4 UDP/TCP stack designed using the CNDP graph node library. We also have language bindings for Go and Rust, and there are a couple of Ansible playbooks you can use to help with setting up your system.

This is a diagram of the CNDP and Kubernetes deployment model, which shows the integration of CNDP with the AF_XDP Plugins for Kubernetes, that is, a device plugin and a CNI. The benefit of this integration is twofold: security and scalability. Security: in order to run the CNDP application in a secure, unprivileged pod, two aspects of AF_XDP socket creation needed to be done by the AF_XDP device plugin and CNI, namely loading the eBPF program on the netdev and updating the XSKMAP with the socket file descriptor. Scalability: in addition to helping CNDP run in a secure, unprivileged pod, the AF_XDP device plugin helps with scalability by supporting the subfunctions API. Since an AF_XDP socket is associated with just a (netdev, queue ID) pair, a single netdev comprising multiple subfunctions can be used by multiple pods instead of moving the entire interface into a pod. Ethtool filters are programmed by the CNI, and traffic metrics are exported by a sidecar container running a Prometheus agent.

Switching gears now, let's move on to the next section, where we'll talk about BESS. What is BESS? BESS is a software switch designed to be extensible and highly performant. It is the first software switch designed to support NFV in addition to traditional virtual networking tasks. There are four components of BESS: bessd, the ports, the modules, and bessctl. bessctl is the controller for bessd, and it offers a command line interface allowing an administrator to configure which ports are connected to which modules and so on.
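As a tiny illustration of the ports-and-modules model just described, a bessctl pipeline script looks roughly like the sketch below. This is a sketch only: PMDPort, PortInc, and PortOut are standard BESS classes, but the exact script-level syntax can vary slightly across BESS versions.

```python
# bessctl pipeline script sketch (loaded via bessctl, not plain Python).
# Create a DPDK-backed port, then wire two modules so packets received
# on the port are sent straight back out of it (a simple loopback).
port0 = PMDPort(port_id=0)
PortInc(port=port0.name) -> PortOut(port=port0.name)
```

In a real UPF pipeline the graph between PortInc and PortOut contains the modules that parse, match, and rewrite the GTP-U traffic.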
bessd is the daemon, as the name suggests, and it's the core software switch. Ports are the interfaces where packets may enter or exit bessd, and modules are pieces of code that allow bessd to inspect and modify the packets. For the integration of CNDP with BESS, we started with a CNDP BESS port, which enables us to send and receive packets to and from the networking interface using AF_XDP. We added support for this alongside the existing DPDK port. So that's BESS.

Now, what is OMEC UPF? OMEC UPF is a project under the Open Networking Foundation which implements a user plane function. It makes use of the PFCP protocol for the communication between the SMF (in the case of 5G) and the UPF. It's widely used as part of the Aether platform in conjunction with the SD-Core mobile core control plane. There are two parts to the UPF in OMEC UPF: the PFCP agent and the data path. The PFCP agent is a Go-based implementation, and it is used to interact with the mobile core control plane. The PFCP agent implements data path plugins that translate PFCP messages into data path specific configurations. There are currently two data path plugins in the project: the BESS plugin and the UP4 plugin. The one we are interested in for our CNDP integration is the BESS plugin: since we added the CNDP port to bessd, we integrate with the BESS plugin in OMEC UPF. We already talked about BESS. UP4 is an implementation leveraging P4 programmable switches to realize a hardware-based data path. The combination of the PFCP agent and UP4 is referred to as P4-UPF, while BESS-UPF denotes the combination of the PFCP agent and the BESS data path. Support for new data paths can be provided by implementing new plugins. CNDP integration with BESS for the UPF provides flexibility in terms of deployment and horizontal scalability.

So now that we have talked about CNDP, BESS, OMEC UPF, and how CNDP integrates with them, let's talk about how we tested this. This is how our development and test setup looks.
We have two systems. System one runs the UPF pipeline, and system two runs the traffic generator. The two systems are connected back-to-back with Intel Ethernet Controller E810-C for QSFP NICs. For generating traffic, we are using DPDK pktgen with BESS scripts to generate the GTP traffic. These scripts are already part of the OMEC UPF repo, and we were reusing them. They simulate both uplink and downlink data traffic from multiple UEs to the app server. In our current setup, we used the Intel Ethernet Controller E810-C for QSFP with the DDP profile, that is, Dynamic Device Personalization for telecommunication workloads, with GTP-U enabled. We used the out-of-tree ice driver and set TC filter rules on the access and core networking interfaces to do GTP-U RSS based on the inner UE IP address in the encapsulated GTP-U packet. Currently, we are setting the TC filter rules using a batch script. With the TC filter rules, we use Application Device Queues (ADQ) to create queue groups, and only GTP-U packets are redirected to the required queues. The default XDP program is loaded on the netdev, and AF_XDP sockets are attached to the required queues, which handle the GTP-U packets. That is the test system we used to test out the integration.

Now, let's talk about the Kubernetes integration. When we initially started out, we used the Docker setup script which is in the OMEC UPF repo. Once we had that working and were confident it worked, we moved on to the Kubernetes integration. This is how we went about it. The OMEC project has a UPF deployment YAML. We took that and modified it to use the local CNDP plus BESS UPF images. The AF_XDP Plugins for Kubernetes that we talked about in the CNDP deployment model diagram were also deployed with the CNDP BESS UPF application. The AF_XDP CNI was used to create the network attachment definitions for the access and core networking interfaces. That covers the Kubernetes integration.
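For reference, a network attachment definition for such an interface looks roughly like the sketch below. The pool name, network name, and exact CNI config keys here are illustrative assumptions; the real values come from the AF_XDP Plugins for Kubernetes deployment and the UPF Helm values.

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: access-net                                # illustrative name for the access interface
  annotations:
    k8s.v1.cni.cncf.io/resourceName: afxdp/myPool # pool advertised by the AF_XDP device plugin
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "afxdp"
    }'
```

A pod then requests the `afxdp/myPool` resource and references `access-net` in its `k8s.v1.cni.cncf.io/networks` annotation.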
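Going back to the TC filter batch script mentioned in the test setup: the kind of rules involved can be sketched as below. The interface name (`access0`), queue counts, and traffic class layout are illustrative assumptions, not our exact script.

```shell
#!/bin/bash
# Illustrative ADQ-style queue-group setup (interface name and queue
# counts are assumptions). Split the device's queues into two traffic
# classes: tc0 (default traffic) and tc1 (reserved for GTP-U).
tc qdisc add dev access0 root mqprio num_tc 2 map 0 1 \
    queues 4@0 4@4 hw 1 mode channel

# Attach a classifier hook and steer GTP-U (UDP destination port 2152)
# into tc1's queue group in hardware; AF_XDP sockets are then bound to
# those queues.
tc qdisc add dev access0 clsact
tc filter add dev access0 protocol ip ingress flower \
    ip_proto udp dst_port 2152 skip_sw hw_tc 1
```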
Now, I would also like to introduce Aether-in-a-Box and how we integrated the CNDP BESS UPF with it. Aether is ONF's 5G/LTE connected edge platform-as-a-service. It's the first open source 5G platform for enabling enterprise digital transformation. It provides mobile connectivity and edge cloud services for distributed enterprise networks as a cloud-managed offering. It's an open source platform optimized for multi-cloud deployments, with simultaneous support for wireless connectivity over various spectrum bands. Aether-in-a-Box, on the other hand, provides an easy way to deploy Aether's SD-Core and other components and then run basic tests to validate the installation. It can be set up with either a 4G or a 5G SD-Core; in our case, we used the 5G SD-Core. It can be deployed with or without the interactive GUI for examining and changing the configuration, known as the ROC. If the ROC is not deployed, you can use a simple tool called simapp to configure the required state. The values.yaml provided to the Helm charts from the Aether-in-a-Box repo was modified to use the local images. The UPF mode was set to CNDP, and we tested it with the 5G SD-Core. The gNodeB simulator performs the registration plus the UE-initiated PDU session establishment and sends the user data packets.

So now we will see a demo of the Aether-in-a-Box integration with CNDP. First, we do a make reset-5g-test. With that, the Helm chart gets deleted, the release is uninstalled, and we wait for all the pods to terminate. If you notice, we have a pod for the AMF, a pod for the SMF, upf-0, and so on. Our changes are in the upf-0 pod, which has the CNDP plus BESS UPF image. Then, when we do a make 5g-test, the Helm charts get downloaded and the simulation starts running. Helm charts are the primary method of installing the SD-Core resources, and Aether-in-a-Box provides a great deal of flexibility in terms of which Helm chart versions you want to install.
There are local definitions of charts, the latest charts from the tip of master, and specified chart versions for deploying a specific Aether release. Aether-in-a-Box can be run on a bare metal machine or a VM. In this case, we are running it on bare metal on Ubuntu 18.04. The data interface specified there is a dummy networking interface, and it uses macvlan networks called core and access. The behavior of the UPF is to forward packets between its access and core interfaces while removing and adding GTP encapsulation on the access side. Upstream packets arriving on the access side from the UE have their GTP headers removed, and the raw IP packets are forwarded to the core interface.

So now, in the simulation, the gNodeB simulator is doing the registration plus the UE-initiated PDU session establishment, and it sends the user data packets. Aether-in-a-Box will now print out a lot of logs showing what exactly is happening at each stage of the simulation. Here we are testing with five profiles, and for each profile five packets are sent, so a total of 25 packets traverse the pipeline. At the end, we will see whether the simulation passed or failed. Now that the simulation is over, we can look at the BESS UPF pipeline, which ran while the simulation was happening; that is on the next slide. This is the BESS UPF pipeline. It's graph based, and you can see the packets traversing the pipeline, going through each of the nodes in the graph; the output view mode is set to the total packets received. So that's the demo.

To summarize: CNDP is a recently open sourced cloud-native solution for accelerating packet processing applications. The integration of CNDP with OMEC UPF highlights some of the achievable improvements for 5G deployments.
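The upstream decapsulation behavior described above, stripping the GTP-U header and forwarding the inner IP packet, can be sketched in a few lines. This is illustrative parsing code, not the BESS-UPF data path, and it handles only the mandatory 8-byte GTP-U header plus the optional 4-byte field block, not chained extension headers.

```python
import struct

GTPU_PORT = 2152  # well-known UDP port for GTP-U

def gtpu_decap(datagram: bytes) -> tuple[int, bytes]:
    """Strip the GTP-U header from a UDP payload, returning (teid, inner_packet)."""
    flags, msg_type, length, teid = struct.unpack_from("!BBHI", datagram, 0)
    if flags >> 5 != 1:
        raise ValueError("not GTP version 1")
    if msg_type != 0xFF:  # 0xFF = G-PDU, i.e. encapsulated user data
        raise ValueError("not a G-PDU")
    offset = 8
    if flags & 0x07:  # any of the E/S/PN bits set -> optional 4-byte block present
        offset += 4
    # 'length' counts every byte after the first 8 (optional fields
    # included), so the inner packet ends at byte 8 + length.
    return teid, datagram[offset:8 + length]

# Example: an 8-byte header (version 1, G-PDU, TEID 0x1234) wrapped
# around a 5-byte dummy "inner packet".
hdr = bytes([0x30, 0xFF]) + struct.pack("!HI", 5, 0x1234)
teid, inner = gtpu_decap(hdr + b"inner")
```

The downlink direction does the reverse: the UPF prepends a GTP-U header carrying the tunnel endpoint ID (TEID) for the UE's session before forwarding toward the access side.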
I have a slide here with some useful links to the GitHub repos for each of the projects mentioned in the slide deck, as well as links to blog posts, webinars, and a technology guide. That's all I had. Thank you.