Hey, everyone. Thanks for joining us today. My name's Eric LeJoy. I'm with Red Hat, part of the Telco vertical within EMEA, even though I have customers globally. And I have my friend Aapo here with me today from one of our partners, and we're going to be talking about UPF with eBPF. The first thing I wanted to show you is that we're short on time for Q&A, but there are two of us, so if you open that link at the very top you can post live questions; they'll show up on our laptop here. Whatever we can't answer in the session we'll take with us, but we may ask you for your contact so we can follow up with you afterwards.

Now from my point of view on the Red Hat side, we are a key platform provider for a lot of our ecosystem. You've seen OpenShift mentioned today at Kubernetes, sorry, I'm not allowed to say that word here, but you have seen OpenShift mentioned. And with OpenShift, we act as a thin red line. So my one minute of fame here today is to show you that if every single one of you is a CNF of some sort in a 5G deployment, you could think of Red Hat as this aisle down the center that you're all running on. You get your network from there, you get your storage, you get everything you need to run your application and to talk to each other. So with that, I want to hand it over to Aapo, and he can talk about what he and Cumucore have been doing on top of OpenShift.

Thank you, Eric. Wow. OK. Well, Eric is playing with the Q&A, so I can tell you about me. I've been at Cumucore for four years, working on this packet core. At Cumucore we do the general 5G core NFs; I do the UPF, so that's my focus for this talk. I'm an Aalto University alumnus from Finland, where Cumucore is based, with a telecom master's, and I did EE as my bachelor's. So I'm also a bit interested in the hardware part, but not as much, not a master's worth.

So what is the UPF? It's an NF with three interfaces, 3GPP-based. There's N3 towards the RAN, N6 towards the packet data network, and the control plane connection N4 to the SMF in the 5G core. It's very similar in this regard to the 4G core that we also have.

So when designing the UPF, and the goal of this talk is that you can all make your own UPFs after this, we can clearly split the interfaces into two: the control plane and the user plane. The PFCP connection has a low packet rate, so we can handle it with a normal user-space application; it doesn't require anything fancy.

Here are the Cumucore UPF pieces, a rough split. There's the PFCP server, and in the 5G core we also have an HTTP server, which communicates with the other NFs for discovery, to make it dynamically tunable. For example, the NRF, the network repository function, uses this. Then for the BPF part, the userland application has a BPF loader incorporated into the UPF. The user plane part, this 3GPP-to-eBPF interface, is our design: it translates the 3GPP IEs from the PFCP interface, N4, into SDN-style flows, which are then executed by the eBPF code. There are a couple of benefits to this, but also drawbacks. That's our design.

So how packet classification in the UPF works: there is a classification part and an action part that the UPF performs. The PDRs, the packet detection rule IEs, are received over N4 in the UPF. The UPF then uses these, their precedences and all the other IEs included in them, to decide whether a packet is detected or not. That's the decision.
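To make that classification step a bit more concrete, here is a minimal slow-path sketch in C of matching a packet against a set of simplified PDRs by precedence. The struct layout, field names, and wildcard handling are illustrative assumptions for this write-up, not the actual Cumucore data model; a real PDR carries many more IEs.

```c
/* Hypothetical sketch: a stripped-down PDR match on the slow path.
 * Field names and match logic are illustrative assumptions, not the
 * Cumucore implementation; a real PDR carries many more IEs. */
#include <stdint.h>
#include <stddef.h>

struct pdr {
    uint32_t precedence;   /* lower value wins (per 3GPP TS 29.244) */
    uint32_t teid;         /* uplink: outer GTP-U TEID, 0 = wildcard */
    uint32_t ue_ipv4;      /* downlink: UE address, 0 = wildcard */
    uint32_t far_id;       /* forwarding action to apply on match */
};

/* Return the best-matching PDR for a packet, or NULL if none matches. */
static const struct pdr *classify(const struct pdr *pdrs, size_t n,
                                  uint32_t teid, uint32_t ue_ipv4)
{
    const struct pdr *best = NULL;
    for (size_t i = 0; i < n; i++) {
        const struct pdr *p = &pdrs[i];
        if (p->teid && p->teid != teid)
            continue;
        if (p->ue_ipv4 && p->ue_ipv4 != ue_ipv4)
            continue;
        if (!best || p->precedence < best->precedence)
            best = p;
    }
    return best;
}
```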
And then for the action part, the most fundamental IE is the FAR, the forwarding action rule, which is applied to the detected packet. The way the Cumucore UPF does this is that it builds a cache of all the IEs that are received and then constructs a flow, and the flow state is updated with every update. So like in OVS, for example, you have a slow path and a fast path, and the controller can install flows so that packets seen more often take the fast path. That's roughly what happens here: you have the IEs from N4, the UPF constructs this flow diagram that is used to forward packets, updates the actions, and finally it can update the flow table. That is then written to the eBPF map, which is accessed by the eBPF application to do the forwarding itself.

So now we get to the eBPF application. This is what happens to an uplink packet, a bit simplified, but in effect the decisions are the same. There is very little to do, because the user-space application that handles N4 and constructs the flow diagram has already done all the work. All the flows are constructed, so the eBPF application only looks up and executes: a TEID lookup, with the QFI fields and everything, then packet modification, either the GTP header is popped or it isn't, and then an XDP redirect. XDP is used for maximum packet rate, but you could also use other eBPF hooks; XDP is what we use currently.

About the drivers: the kernel backports that are required, for VFs for example, are all in place. Doing that yourself on mainline kernels is much more difficult, but with the Red Hat team backing us, it's easy. We also use virtio, and that hasn't been an issue. With other NICs there might be some problems if the NIC is not XDP-enabled, or only half XDP-enabled. That's also a drawback.

The other direction is the same thing, but with a UE lookup. Currently the UE detection is IP-based, but in the future we are planning to implement Ethernet and also IPv6.

So when you deploy this in the cloud, these two cases are the most common for me. There's hardware pass-through, super simple: you have a hardware device dedicated to the UPF, passed through to a container or a VM, and you attach the XDP application to that. In a virtualized environment you can create as many interfaces as you like; there's a small performance penalty, but nothing major. What's good about the virtualization approach, and why I think it's still more prevalent, is that you can do service chaining on the same machine more easily. With the hardware approach you usually need some custom solution for chaining services together.

About service chaining in the future: as everybody has heard from Cilium and other BPF projects, work is going on around loading multiple BPF programs on one NIC. So you can do things on the NIC that are usually done in a container or VM environment. You can chain, for example, this UPF and an MPLS BPF application on the same interface, and you don't even need a VM or a container for that; you can do it on the NIC. That's great. We are also looking at this, but it's still in the early stages. Also, as BPF applications become more widely available, you can chain programs from different vendors together. If you want some feature from another vendor, you can use it, for example our UPF with some other vendor's tunneling solution on top of it. And as we are a small networking company, we can implement the things you want.
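To show roughly what that uplink fast path looks like, here is a simplified, hypothetical eBPF/XDP sketch: a TEID lookup in a hash map populated by the slow path, an optional GTP-U decapsulation, and an XDP redirect towards N6. Map and struct names, the fixed header assumptions (no IP options, no GTP-U extension headers), and the missing L2 rewrite after decap are all simplifications; this is not the Cumucore source.

```c
/* Minimal uplink fast-path sketch: TEID lookup, GTP-U decap, XDP redirect.
 * Names, sizes and the map layout are assumptions for illustration only. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define GTPU_PORT 2152
#define GTPU_HDR_LEN 8          /* mandatory GTP-U header, no extensions */

struct gtpu_hdr {
    __u8   flags;
    __u8   type;
    __be16 length;
    __be32 teid;
} __attribute__((packed));

struct ul_flow {                 /* value installed by the slow path */
    __u32 out_ifindex;           /* N6-side interface to redirect to */
    __u8  decap;                 /* pop outer IP/UDP/GTP-U headers */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);          /* GTP-U TEID, network byte order */
    __type(value, struct ul_flow);
} ul_flows SEC(".maps");

SEC("xdp")
int upf_uplink(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    struct udphdr *udp = (void *)(ip + 1);   /* assumes no IP options */
    if ((void *)(udp + 1) > data_end || udp->dest != bpf_htons(GTPU_PORT))
        return XDP_PASS;

    struct gtpu_hdr *gtp = (void *)(udp + 1);
    if ((void *)(gtp + 1) > data_end)
        return XDP_PASS;

    struct ul_flow *flow = bpf_map_lookup_elem(&ul_flows, &gtp->teid);
    if (!flow)
        return XDP_PASS;         /* unknown TEID: punt to the slow path */

    if (flow->decap) {
        /* Drop outer Ethernet + IPv4 + UDP + GTP-U; a real UPF would still
         * build a fresh L2 header for the inner packet before redirecting. */
        int pop = sizeof(*eth) + sizeof(*ip) + sizeof(*udp) + GTPU_HDR_LEN;
        if (bpf_xdp_adjust_head(ctx, pop))
            return XDP_DROP;
    }
    return bpf_redirect(flow->out_ifindex, 0);
}

char _license[] SEC("license") = "GPL";
```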
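On the slow-path side, writing a computed flow into that map from the user-space application could look roughly like this with libbpf. The pin path and struct layout are assumptions matching the sketch above, not the actual Cumucore loader.

```c
/* Slow-path sketch: install an uplink flow into the map the XDP code reads.
 * The pin path and struct layout are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <bpf/bpf.h>

struct ul_flow {
    uint32_t out_ifindex;
    uint8_t  decap;
};

int install_uplink_flow(uint32_t teid_be, uint32_t out_ifindex)
{
    /* Hypothetical pin path; the loader could equally keep the map fd. */
    int map_fd = bpf_obj_get("/sys/fs/bpf/upf/ul_flows");
    if (map_fd < 0) {
        perror("bpf_obj_get");
        return -1;
    }

    struct ul_flow flow = { .out_ifindex = out_ifindex, .decap = 1 };
    int err = bpf_map_update_elem(map_fd, &teid_be, &flow, BPF_ANY);
    if (err)
        perror("bpf_map_update_elem");

    close(map_fd);
    return err;
}
```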
So just tell us what to do, and we'll do it. If there are some questions, there's the mic, and I'll try to answer them for you. Yes, there's a question.

First of all, thanks for a very good presentation. Thank you, enjoyed it. I have two questions, because I think that a UPF in eBPF is a holy grail of the industry. The first question is, how are you solving things like lawful interception? The second one: how are you doing things like header enrichment and all this nasty DPI stuff, et cetera? Because the tunneling itself, stripping GTP headers and just going N3 to N6, is pretty much the essence of the UPF, but unfortunately we still have a lot of additional functions like HTTP header injection, DPI, or lawful interception, those kinds of things. So I just wanted to understand how you are solving that in your software. Thanks.

Yeah, I can answer that in terms of how it's done now and how I would like to see it done. How it's done now is just chaining VMs or containers together, so the UPF forwards the traffic to another NF which does this. But in the future, how I want to see it done is in XDP, so we can do dynamic flow steering. Another question?

No, actually I just wanted to say that solving it like this is just shifting the problem to another box, because we are trying to do the UPF outside of the container, only to use another container to put the same traffic back through. So that was just the point. And the question about the future, maybe we can discuss it offline: is it really feasible to put such complex stuff like DPI or header enrichment or lawful interception into something so small, because BPF programs tend to be, and should be, small and compact. The question is whether a full-featured UPF is something that can really fit into eBPF. I think that will be a sort of open discussion, because this is one of the critical topics in the industry right now.

Yeah, if you have any ideas or questions, my contact is at the top; send me an email and we can discuss it. But I can address that briefly: eBPF can be used to split traffic, and it can be used to steer traffic to other destinations. Normally, if you do DPI, you have some kind of mirroring somewhere, so you can use BPF for that, and then you can do exactly the DPI you want with the BPF applications, and it might touch only a very small amount of the traffic. So that's doable. But if you want to do something heavy, then of course I think you need another environment for that: you do the steering and then the DPI. That's my take. Maybe in the future it's done differently.

And then there's the platform angle, right? If you have one server at the edge acting as a slice of your 5G core, let's say, and you need to do lawful intercept there, are you really going to do that at the cost of CPU cores on that small, limited resource? Or do you already have optical taps sitting there? So in my view the future is going to be a mix: how do you do it efficiently, and is it a combination of something that already exists there that you're using for lawful intercept and something new, eBPF-based, that can look at any CNI? If you're running workloads ten years from now, it may be a mix of, let's say, five CNIs, based on the capabilities of those CNIs. But please, next question. Yeah, thanks for the presentation.
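As a rough illustration of that steering and mirroring idea, a TC-level BPF program could clone only selected flows towards a DPI or intercept interface while everything else passes untouched. This is a hedged sketch under my own naming assumptions, not something shown in the talk or taken from the Cumucore UPF.

```c
/* Sketch: mirror packets from selected UE addresses to a DPI interface,
 * let everything else pass. Map name, key choice and section name are
 * illustrative assumptions. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);          /* UE IPv4 address (network byte order) */
    __type(value, __u32);        /* ifindex of the mirror/DPI interface */
} mirror_targets SEC(".maps");

SEC("tc")
int mirror_selected(struct __sk_buff *skb)
{
    void *data = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return TC_ACT_OK;

    __u32 *mirror_if = bpf_map_lookup_elem(&mirror_targets, &ip->saddr);
    if (mirror_if)
        /* Copy the packet to the DPI interface; the original continues. */
        bpf_clone_redirect(skb, *mirror_if, 0);

    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```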
It sounded all pretty straightforward and easy, and I'd be interested in the stuff that gave you headaches, what was really challenging, if there was anything with eBPF. I think the most fun part was the eBPF verifier.

Hello. Hello. Good presentation, thanks. A couple of questions. First of all, do you have a preference, does your application work better in a bare-metal cloud or a VM-based cloud? Well, the absolute best is of course without anything in between, but if you need something like orchestration or any kind of dynamic scaling, then you need something to do that, and then you just do it. Maybe that's not the question. I was thinking from the application perspective: if I had a bare-metal deployment of Kubernetes, would you say that's the best-performing way to run your apps? For XDP it's the NIC that matters, the NIC driver. At least vendors like Intel, Mellanox, what have you, have implemented XDP in the driver, so that's fine. And you can do VFs with SR-IOV, so that's cool.

Last question: do you do other components apart from the UPF, the rest of the 5G core? I'm focused on the user plane, so I don't do the control plane stuff that much; we have other people in our company who do the control plane. AMF, SMF? AMF, SMF, PCF, NRF, what have you, you can keep listing those acronyms. Thanks.

So, Gergely, it's been a long time since I've seen that name, he has a question: how do we deploy an eBPF app on top of OpenShift? Today it's a little bit of a workaround, and hopefully eventually we'll have, I would say, a smoother way of doing it with the kernel. But today, I believe you were working directly on the worker nodes. Yeah, currently in OpenShift, with OVN as the CNI, as long as you're using veth as the virtual NIC, you can do it. There used to be this limitation in Linux that you needed to load an XDP program on both ends of the veth pair, but that is no longer the case if you're running the newer patch, which has been backported by Red Hat in some version, I don't remember exactly which, but we can get back to you if you want to know that. Thanks, guys. And it's great to be traveling again and seeing people face to face.
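For the curious, the kind of loader a UPF (or someone experimenting on a worker node) could use to attach an XDP program to an interface looks roughly like this with plain libbpf (1.0 or newer). The object file name, program name, and attach mode are assumptions, not the actual Cumucore loader or an OpenShift-specific mechanism.

```c
/* Sketch of a minimal XDP loader: open a compiled BPF object, load it,
 * and attach the program to one interface in native driver mode. */
#include <stdio.h>
#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
        return 1;
    }

    int ifindex = if_nametoindex(argv[1]);
    if (!ifindex) {
        perror("if_nametoindex");
        return 1;
    }

    /* Hypothetical object file containing the XDP program sketched earlier. */
    struct bpf_object *obj = bpf_object__open_file("upf_uplink.bpf.o", NULL);
    if (!obj || bpf_object__load(obj)) {
        fprintf(stderr, "failed to open/load BPF object\n");
        return 1;
    }

    struct bpf_program *prog = bpf_object__find_program_by_name(obj, "upf_uplink");
    if (!prog) {
        fprintf(stderr, "program not found in object\n");
        return 1;
    }

    /* Native/driver mode for performance; XDP_FLAGS_SKB_MODE is the generic
     * fallback on NICs whose drivers lack XDP support. */
    if (bpf_xdp_attach(ifindex, bpf_program__fd(prog), XDP_FLAGS_DRV_MODE, NULL)) {
        fprintf(stderr, "failed to attach XDP program to %s\n", argv[1]);
        return 1;
    }

    printf("attached to %s, ifindex %d\n", argv[1], ifindex);
    return 0;
}
```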