Not meeting its promise and being very complex and fragmented. But here I think a lot of the service providers are really looking to use cloud native principles: to be significantly more agile, to leverage a lot of the technology that's happening in the broader technology space, and to get into CI/CD-type practices. And I should say that means they're moving to Kubernetes. Now, a few years ago I still would have gotten arguments; people would have said, oh, 5G is supposed to be cloud native, but that doesn't mean Kubernetes necessarily. Yeah, it does. So I think people here probably recognize that Kubernetes has become the de facto orchestrator for containerized workloads, and it is really key to anything you're doing that is cloud native. So I'm going to speak specifically about the networking side of this.

So again, these service providers are rushing pell-mell into Kubernetes, and it is interesting: when I get on calls with customers, generally I find there are a couple of people who really understand 5G and rarely anyone who understands Kubernetes, or I'm on an alternate-reality call where there are people who just understand Kubernetes and nothing about 5G. Getting them both on the same call is virtually impossible, and they don't really understand each other. And one of the key issues is that Kubernetes was not created for these workloads.
Right, Kubernetes really came of age with enterprise-type workloads and web-type workloads, and so now we're trying to apply telco workloads to Kubernetes, and there are some basic, fundamental gaps. And I'll tell you, it's interesting: we had one of the guys from the original team that made Kubernetes come in to F5 about four or five years ago, and we were just peppering him with questions about networking and Kubernetes. He finally held up his hands and said, look, you know, the guys who were programming this were programmers, they weren't networkers. And if you think about it, one of the key things Kubernetes does is obfuscate networking, which is a fundamental issue when your business is networking, right? Kubernetes also evolved with a lot of the people pushing it being public cloud providers, where there's a certain assumption about the infrastructure around it.

So we identified three major areas where there are gaps.

One is Kubernetes fitting into the broader telco network. Here the thing is, Kubernetes is really focused on orchestrating what's happening within the cluster; that's really what it's good at, what's within its cluster. The problem is that telco networks are extremely complex. You have a lot of separated VLANs and VRFs and DMZs and firewalls all throughout them, and just the IP engineering, and anybody who's worked there knows what I mean when I say IP engineering, just getting an IP address for a new network function can take weeks. Just getting the tickets through to get through firewalls can take weeks. These are very complex networks, and dropping in a cluster that has different network functions is a complex effort, so you need to be able to integrate with the routing infrastructure, with the different kinds of networks, et cetera.

Secondly, when the 5G specifications, even the service-based architecture, were being outlined, it wasn't dictated that they be in Kubernetes, and Kubernetes wasn't designed for them, so the concepts in Kubernetes do not necessarily map directly to the concepts defined by 3GPP and so forth. For example, one of the most obvious ones is protocols. I'm going to set aside the service-based interfaces for a second, but you know, Diameter is just not a friend in Kubernetes. In Kubernetes, the whole concept of ingress, which is exposing services to the outside world, is an HTTP function, period, end of conversation. So anything that falls outside of that normal paradigm is problematic. Also, you end up working a lot in terms of Services, things that are being exposed to the outside world, which are essentially just saying, I'm going to expose this endpoint for you to come reach me. But the thing is that network functions are not just things you reach out and get to; they are also things that call out. And so one of the huge things is the bastard stepchild, egress. There is a major problem with egress, and this ties into both of these last two. I have a great example: if you have a packet floating around in your network, you want to know what network function it came from, because if it came from your AMF, you want firewall rules that allow it to reach your N network; if it came from your policy function, you do not. So you need a more complex, more specialized egress function, and tying ingress and egress together is extremely important, and right now they are just absolutely, totally different network paths.

Additionally, you have a lot of non-3GPP interfaces, or even within 3GPP, even in 5G, you have SCTP, and SCTP is a layer 4 protocol, but it's just not supported in most Kubernetes environments. And when I say Kubernetes, I want to make it clear I'm not just talking about literal vanilla Kubernetes; I'm talking more generally in terms of Kubernetes and the patterns and the
general tools that people use. Plus, many of those tools are not really what I would call carrier-grade in terms of availability, downtime, and, you know, scale.

And then third is about security. Kubernetes has a fair bit of security, again, within the cluster; Kubernetes really focuses a lot on what happens within the cluster. But in terms of how that integrates with the broader security policies of the service provider... that's just, they don't want me talking about this stuff, this is too secret here. But in terms of integrating with the broader network, it's just not there. And so, things like integrating with the firewall policies in the broader network, having different kinds of firewall and DDoS protection, et cetera.

So if Kubernetes is not really fit for purpose out of the box, if vanilla Kubernetes and the normal tools people use are not really appropriate for these telco use cases, what do we do? We really have two options. We can extend Kubernetes, and Kubernetes is built to be extensible; a big, big part of Kubernetes is that it is extensible. Or you can just break all the patterns and burn it to the ground. And I don't have to tell too many people here that there's a lot of burning going on out there at the moment. People are trying desperately to get their CNFs running, they're trying desperately to get their cores running, they're trying desperately to get stuff going, and they're cutting corners and doing things that are going to cause problems down the line. We are in danger of having another NFV on our hands, where everything is so bespoke that you can't really translate it from one place to another.

So, just to talk about a couple of the patterns. One of the ones that really drives me a little nuts personally is every time I hear the word Multus. Now, Multus in and of itself is not a bad word, right? Multus is a meta-CNI that allows you to run more than one CNI, and that is extremely helpful; it's very powerful. But in this context, it's always, oh, the CNF vendors all want to use Multus, they want Multus interfaces, they want Multus. And it becomes a code word for wanting to get around Kubernetes networking. Basically, the CNFs are saying, look, I need direct network access to the outside world, I don't want Kubernetes involved in it at all. I want direct network access, often SR-IOV access, and I'm going to manage how the IP addresses are presented, I'm going to manage how it connects to the routers, I'm going to manage all of that. The problem is, what you're essentially doing is taking Kubernetes, which is this thing that is highly dynamic and is designed to orchestrate a highly dynamic environment, and spilling all of the complexity from inside your cluster out to the outside of your cluster. It means that the owners in the service providers now have to worry about all of this stuff out here: how do they deal with the fact that these containers could drop at any time, so these IP addresses could come and go at any time? They have to manage all of this complexity that has now spilled out into the network. That obviously causes operational complexity, and it makes IP address management very complex, and also the interconnection between network functions very complex. Security complexity, because now each of those CNFs is directly exposed to the outside; there's nothing to intermediate there, right? And then networking complexity, because you're really exposing a lot of the networking complexity, and suddenly this CNF needs to be aware of your VRFs and your VLANs and all of this stuff, right?
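For context, the Multus pattern being described usually looks something like the sketch below: a NetworkAttachmentDefinition for a secondary interface, attached to a pod via an annotation. This is a hypothetical illustration; the network name, VLAN, image, and IPAM settings are invented, not from any specific deployment.

```yaml
# Hypothetical sketch of the Multus pattern: a secondary network
# plus a CNF pod that requests it. All names and values are illustrative.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n3-sriov                # invented name for an SR-IOV user-plane network
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "vlan": 100,
    "ipam": { "type": "static" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: upf-0                   # hypothetical CNF pod
  annotations:
    # Multus attaches this secondary interface alongside the
    # primary-CNI cluster interface.
    k8s.v1.cni.cncf.io/networks: n3-sriov
spec:
  containers:
  - name: upf
    image: example.com/upf:latest   # placeholder image
```

The pod now has a second interface that bypasses cluster networking entirely, which is exactly the "spilling out" problem: the addressing, routing, and failure handling for that interface all have to be managed outside Kubernetes.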
So we see this with every single CNF vendor that I can think of. When we first started working, and we're in production right now with a large tier-one carrier, they literally decided to build basically a horizontal stack, where they created the platform and invited CNFs in. Every single CNF vendor they had asked for Multus, and Multus meant this.

Another thing we see a lot of is, and I'll tell you, well, I'll get to that in a second. Another thing we see a lot of is creating separate clusters per CNF. Oh, okay, you know, I see a packet, all I know is it came from that cluster over there, that cluster's all AMFs, I'll just let it through. Freaking nuts. They're literally creating separate clusters for each CNF, and it increases the complexity and the capex and the operational overhead, and it duplicates all of the build work you have to do, instead of having one large cluster or larger clusters, right? And yet we see this all the time. The thing is, both with this and with the last one, a lot of smaller providers, particularly tier twos or some tier ones, are saying, well, we're buying it all from one vendor, so we're just asking them to take care of that. But guess what: these things still spill out and become important for the application owners, because they have to manage things like this outside world, they have to manage having a bunch of separate clusters in their network.

The approach we've taken is we've created a thing called a service proxy. The name came from essentially one of the first RFPs we responded to, and it really replaces the built-in service proxy in Kubernetes. It has certain characteristics, and I want to say that these are characteristics we should all be replicating; F5 should not be the only ones with these features.

First of all, we need to use Kubernetes patterns. Because a lot of things are not in Kubernetes, we end up extending Kubernetes, and that means a lot of custom resource definitions at the moment, and I'll get back to that. But at least then it is wholly managed in Kubernetes, and all of the pieces should be inside of Kubernetes. The other thing that a lot of the hacks have is external pieces, where you have an external load balancer, an external whatever, and then you have to do all this coordination between the configuration of that thing out there and the thing in here, right? With our product, it's all entirely within Kubernetes, and there is no GUI, there is no CLI; it's just managed as a part of the infrastructure, which is the way it should be.

You also need to have an interface with the broader network, and that really means a lot of solid BGP capabilities. Egress network address translation, so that when packets are coming out, you can say, oh, that's an AMF packet, that's an SMF packet. Some address translations: for example, there's a large tier-one North American carrier that has an entirely IPv6 network, but some of the CNFs couldn't support IPv6 yet, so they needed IPv4 interfaces, and they had a single-stack IPv4 cluster. So you had to do translation on every single packet going in and out, between v4 and v6. There are a lot of those kinds of complexities that you need to handle in order to make these things work.

You also need to link ingress and egress so that a CNF presents as a single entity. The normal pattern with our software, and what our customers are doing, is trying to fit a CNF within a namespace. That makes it easier to do part of that translation between the Kubernetes concept of the namespace and the standards' concept of the CNF. Sometimes they're spread across a couple of namespaces, but we try to have it be either one or a small number of namespaces that are that CNF, and then you can treat the traffic coming in and out of those namespaces as being part of a CNF.

Additionally, you need to support a broad number of protocols. Everybody says, oh, 5G, it's all service-based interfaces, it's all HTTP/2, you're golden. Well, you're
not 100% golden. First of all, it's not all that; even in 5G there's the NGAP protocol over SCTP. It's not all service-based interfaces. But also, you're going to have very few cores that are purely service-based interfaces, that are pure 5G. We're seeing Diameter all over the place, and a lot of GTP and other kinds of protocols. And even with the service-based interfaces, the way they are used is different from normal web interfaces, because what they are doing is replacing Diameter. So, just like with Diameter, where you made one connection and had a ton of traffic for a ton of subscribers going over it, you actually see a pretty similar pattern. The connection doesn't last for months, but it lasts a good long time, and a lot of traffic goes over it. So the traffic patterns are still different.

And then you need to provide a security layer. For example, we have a layer 4 firewall, so you can take the firewall rules that are in your external firewall and condense them down; basically collapse your infrastructure and have the firewall at the point where you're going in and out of Kubernetes. Because what I'm talking about, the service proxy, has nothing to do with what happens inside of Kubernetes; it has everything to do with the interface to the outside world, and that's where you need this additional security.

And then, the ability to essentially present a consistent CNF. When highly dynamic things are happening inside of Kubernetes, pods going up and pods going down, you're still presenting something as a single CNF. Diameter is a good example: Kubernetes doesn't have a concept of Diameter ingress. We have a CRD for Diameter ingress, and we present an endpoint, and even if the pods behind us are going up and down, that endpoint stays the same. And it ties together ingress and egress, because the egress points, as Diameter peers do, will initiate contact to us, and we're that interface.

So again, the service proxy concept solves three things. It fits into the broader network, because it is a single point of interfacing to that broader network, and it has all of the things you need to do that interfacing. It supports a wider variety of protocols, and it ties together ingress and egress. And it's a single point for security, because once you have that single pane of glass where the traffic is going in and out, you can apply security at that point.

And another thing that some people don't think about, which was really key to winning some of the business we've won: if there's an update for SSL security, say, or they're changing the way they rotate their certificates, or changing whatever you need to integrate with for security, service providers don't want to go to all the different CNFs and all the different CNF groups. Even inside a single vendor there are usually multiple groups, right? But especially if they have multiple vendors, they want to go to one place and say, hey, we want to change the way we're doing this, we want to update it, rather than waiting for all their vendors to update.

So, what I want to get to at the very end, and I assume I am close to the end: again, I don't think F5 should be the only people doing this. One of the problems I have, again, when we talk to customers, is that there usually are people who understand 3G or 5G, or people who understand Kubernetes, and the diagrams they have to start with don't have these functions in them, right?
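To make the CRD-extension idea concrete, a Diameter-style ingress resource could look roughly like the sketch below. This is an invented schema purely for illustration; the API group, kind, and every field name are assumptions, not the actual F5 SPK CRD.

```yaml
# Invented illustration of a "Diameter ingress" custom resource.
# The apiVersion, kind, and all fields are hypothetical, not the
# real F5 SPK schema. Addresses use the documentation range.
apiVersion: example.telco.io/v1alpha1
kind: DiameterIngress
metadata:
  name: pcf-diameter
  namespace: pcf                # namespace-per-CNF pattern from the talk
spec:
  # Stable endpoint presented to external Diameter peers, regardless
  # of pods scaling up and down behind it.
  externalAddress: 203.0.113.10
  port: 3868                    # standard Diameter port
  # Backend pods selected the usual Kubernetes way.
  selector:
    app: pcf-diameter-fe
  # Tie egress to the same identity so the CNF presents as one entity
  # and outbound packets are recognizably "the PCF".
  egress:
    snatAddress: 203.0.113.10
```

The point of the sketch is the pattern, not the fields: one resource gives the CNF a stable external identity for both inbound and outbound traffic, which plain Ingress and Service objects don't do for non-HTTP protocols.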
I want to start having those diagrams include these functions, and I want it to be standard. So one of the other parts to this is that CNF vendors want something they can test against with a generic interface, rather than a bunch of CRDs, because if they are testing with all F5 CRDs, that's a separate set of tests versus testing with a standard interface.

Just to dumb that down a little bit: a lot of CNF vendors right now are using Service type LoadBalancer. Well, Service type LoadBalancer is relatively limited in what it does, and we have a CRD that does quite a bit more with TCP traffic, but also with UDP and SCTP. And there's resistance to it, because they can set up their testing so that it ends at saying "type: LoadBalancer" in their Service definition, and they're done. Then it's up to you, whoever the infrastructure provider is, to provide a load balancer that is, you know, telco-grade, that does everything else that is necessary. They've done their testing, right? I want that same level of simplicity, but for these places where there are these gaps.

And so one of the things we are very actively looking at is the new Gateway API. It currently is in the early stages; there's a TCPRoute, an HTTPRoute, and I think there's a UDPRoute and a TLSRoute. But I want to... we're beginning to work with that group. I want to start looking at adding in egress, adding in some of these other things. There are other groups looking at adding in policies around security, et cetera. But I'd like for us to be figuring out a way to do this in a standard way, so we're using a sort of standard Kubernetes API, because otherwise it's going to end up looking like NFV again, I guarantee. So... are there questions?
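For reference, the Gateway API route kinds mentioned here do exist today (HTTPRoute, TCPRoute, UDPRoute, TLSRoute). A minimal TCPRoute looks like the sketch below; it assumes a Gateway named `telco-gw` with a TCP listener called `diameter` already exists, and the names and port are illustrative.

```yaml
# Minimal TCPRoute from the Kubernetes Gateway API (alpha channel).
# Assumes a Gateway "telco-gw" with a listener named "diameter"
# is already deployed; names and the port are illustrative.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: diameter-route
spec:
  parentRefs:
  - name: telco-gw
    sectionName: diameter
  rules:
  - backendRefs:
    - name: diameter-frontend   # backing Service for the frontend pods
      port: 3868
```

Note that this covers ingress only: there is no egress equivalent in the Gateway API today, and SCTP routes are not part of the spec, which is exactly the gap being described.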
Yes, he's right here.

Audience: You talked about mapping ingress and egress, given the original problem about IPs shifting as pods come up and down. One of the reasons we wanted to define egress is to know what source IP traffic originating from inside the cluster is using. How would you handle that in your scenario?

Yeah, so it's about tying together ingress and egress and IP addresses. I will say there is a super nasty problem if you want to scale. There are a lot of places in Kubernetes, and again, I'm using Kubernetes in a broad sense, in the tools used under the Kubernetes umbrella, that send traffic to one IP. And if you send traffic to one IP, unless you have some way of spreading that traffic out in an HA way, you have a point of failure. So we use a fair bit of ECMP and such to get traffic hashed to multiple instances, like of our service proxy. But in terms of tying the ingress IP and the egress IP together, it's more challenging, because you don't want to expose the pod IP, the internal pod IP, because nothing inside of Kubernetes should be exposed outside. So what we actually do: we have some solutions for UDP; for TCP, we actually have separate egress IPs that are used, rather than the ingress service-exposure IP, because it's very hard to get a symmetric packet flow the other way. So yes, that's tough. We have it for UDP and some others, yeah.

Sorry, okay, so the two questions are: does this sort of eliminate CNI, or how does it play with CNI? And do we implement the Gateway API? At the moment, we implement the Gateway API more as a demo, as nobody else is, so there's no point in our fully implementing it, and it doesn't have the extra protocols we need. So we're in the early stages of that. We're roadmapping late next year to have full Gateway API support. We could have it sooner, but no customer wants it sooner. And I want to see things like egress start moving into the Gateway API before it becomes really valuable.

The second part, how it plays with CNI, is a fascinating thing, right? Because egress, for example, is controlled by the CNI; it's the thing that sees the packets first on the way out. So we are working on what we're calling CNI independence, so it doesn't matter what CNI there is. It's a little bit like how Istio has a CNI that basically puts itself in the data flow before you hit the primary CNI. We're doing something like that, so that any egressing traffic, any traffic going out of the cluster, is sent to us, and it's essentially invisible to the other CNI. While we're getting there, we have integrations with a couple of specific CNIs, but that just became burdensome. If you're running OVN-Kubernetes on OpenShift, we have an integration with it; we have an integration with Calico. But what I really want is to not worry about it; I shouldn't have to worry about what the CNI is, right? So we're working on the CNI independence.

I think we have time for one more question. Yeah, Gargay, can you?

Audience: How independent or open source is this service proxy? Because cloud native should mean, you know, you use the same concepts and you can run on any cloud. If there's something special in the service proxy, something special in a cloud, that doesn't really help.

Yeah, no, I mean, at the moment what we have is Service Proxy for Kubernetes, F5 SPK. It is a product, and there is nobody else who does all of these things. You can band-aid and hack and script and do a bunch of things to get some of it done, but there is no other. What I'm saying about going to the Gateway API is: if we can make an API that is standard for solving the standard problems that service providers have, like tying together ingress and egress and some other things, then if we start using it, other gateways can be valid alternatives, right? Because right now you have to use F5 CRDs, which is a valid way to extend Kubernetes, and CRDs are used all over the freaking place, but I would really like to see it be more standard. And then each vendor would
add their special sauce to try and stand out from the crowd. But I think it is extremely important to get to something more standard, and to use a more standard API than the CRDs.

Okay, thank you. Thank you, thank you, Philip.

All right, as we get ready for the next one: we have a panel. The next panel is going to be "Moving Towards Environmentally Sustainable Operations with Cloud Native Tools," so if you want to go ahead and start getting ready.