All right. So this is the Telecom User Group, and this call is being recorded. The meetings are recorded and posted to YouTube, so if you're presenting anything or speaking, it will be presented in a public forum. The meetings are monthly, on the first Monday, and the time alternates; this is the later time, 1500 UTC, and the next meeting will be at 1100 UTC. Right, the topics are open. If anyone has anything, feel free to add it, or you can say something now if there's an item that you want to discuss. I do think a lot of the discussions that have been happening in the CNF Working Group channel in Slack, there are quite a few of them that we could probably bring into the TUG to focus on. The Telecom User Group is meant to be a place where we can have a more open discussion about any type of concerns or ideas related to the telecom domain, as people come in and look at new technology within, I'd say, the CNCF ecosystem, so the cloud native ecosystem and community across the board. That's a bit broader than the CNF Working Group, which has a more narrow focus. So if anyone has anything, I mean, I see Tal and some other people that have been presenting and talking quite a bit in many areas, but if y'all have any topics that we haven't gotten to in the CNF Working Group, or that have been kind of pushed to the side, please speak up, or we can add them to a future TUG meeting as well. Right now the CNF Working Group is weekly, and that means it's right after this call. We may do some adjustments so they're not back to back like that going forward. Right now we're looking at having a CNF Working Group meeting today, and likely next week too; there may be a gap over the holidays and then it starts up again. The next Telecom User Group meeting is on the 4th of January, and we currently plan to have that; that'll be at 1100 UTC. There's an LFN Developer and Testing Forum where there may be a track with CNCF telecom-related activities, so there could be stuff related to the TUG, the CNF Working Group, potentially the CNF Test Bed, and the CNF Test Suite on that track. We're still trying to work out the details; that's in February. The call for topics will be closing on December 13th, so we've got a week if you're going to get something in. Just putting that out there; there have been more and more topics coming in, so please get stuff in there, and the Kubernetes and cloud native community will get more and more engaged directly. All right, Bill, do you want to talk about the white paper? I can hand it to you; I'll stop screen sharing for a minute. Sure. So for some reason my internet connection is slightly unstable today, but in case anybody wasn't aware, the first white paper that we were working on in the Telecom User Group has now been published, and you can find it in the GitHub repo. I can add a link; the link is in there. So if you want to use this to go out and talk about cloud native, please feel free to use it as a reference point going forward. I think it's a great first step for this group, and I look forward to seeing more white papers coming out of this group. And with that, I think that's a good transition to Jeffrey Saelens to talk about maybe the next white paper to come out of this group. So Jeffrey, do you want to take it? Yep, sure. Let's see here. Can you guys see my browser? Looks good. Okay. Small. Yeah.
Okay, yeah. So first I'm going to throw out a disclaimer: I started this 18 months ago, and in our space things move pretty fast, so just bear with me here. The initial audience for the white paper that Bill was just referencing was really around that kind of CTO, senior-architect level, right? There were some discussions in the CNF Working Group about motivations and why cloud native, and I think that first white paper was trying to achieve some of that, just a generic "why would we do cloud native in the first place?" And to me, the general motivation: if we take off the niche use case hat for a second and think about what our enterprise web services look like, how we present our stuff to customers, how we run a lot of our own internal IT clouds in telco and so on, I would say the vast majority of us, I'd be willing to bet, do a ton of cloud native. I know Charter does: our online marketplace, things like that, it runs in K8s, it runs in the cloud, et cetera. So this was supposed to capture some of the early discussions I had with other providers, like Mr. Bernier down here, and some of the vendors, like Gergely on the call, et cetera, talking about some of the drivers and challenges for why we would do cloud native for the actual, you know, cloud native containers or whatever. Why would we do the packet core the way we're starting to move? I mean, the packet core standards themselves are now actually starting to dictate cloud native and cloud-centric approaches, so this is creeping into our standards bodies as well as we continue to permeate into the cloud native world. So, you know, this is a good idea; like I said, it's pretty old, but the long and short is that not only do a lot of us think this is a good idea, it's being forced on us, right? Vendors also have to do their economy-of-scale discussions internally, and there's no way they're going to be able to support and staff a future where they offer a physical packet core, a virtual packet core, and a cloud native packet core. They need to focus their efforts. You know, "this is the direction we're going, Charter, you need to get ready for it." I have these discussions with a lot of the big players pretty regularly. So then really getting down into what pieces should be cloud native, like why are we doing this, is kind of my attempt here, looking at some of the challenges we had. I use some of this business speak; I tried to do a little bit of research, you know, the whole "crossing the chasm" and early majority thing, right? At this point K8s has crossed the chasm. It's used in a lot of places, it's widely adopted. It's kind of, you know, really not even like that anymore.
It's more like a safe technology for hosting and orchestrating containers, which is what the marketing people say; those of us running real-world K8s workloads, we can see our tears, but for the most part it's baked into a lot of stuff. One thing, though, that I don't think this whole diagram (you know, the one that shows the little curve going up and across, "now your technology is widely adopted") captures is that this ecosystem continues to grow. You're constantly in this weird spot where more and more stuff is being added to this extensible platform, and while K8s itself is established, you'll have early adopters pulling different technology buckets into it. Lots and lots of companies are running K8s, but not every company that runs K8s is using a service mesh. A lot of the telco optimizations, like the Topology Manager, some of the fancier CNI multiplexers like Multus, et cetera, those are out there, and we have early adopters consuming them and doing cool stuff with them. But there are a lot of people who are still scared of it. So one of the things I'm hoping this group and the CNF Working Group provide, with the best practices and so on, is, for those in the more risk-averse spaces, how do we know where something sits within the ecosystem as a whole, as far as its adoption and its stability? You know, IPv6, which is a huge, huge thing for all of us service providers, is still in beta status upstream. It's slowly but surely getting more mature with every K8s release, but sometimes even just the terminology, and how CNCF and Kubernetes designate whether something is alpha, beta, or GA, is different from the rest of us. So if one of my executives hears "beta," it doesn't matter if it's been in beta for seven years and it's super stable; you know, we don't put beta code into our production network, right? So we're trying to change those perceptions. So yeah, some of the challenges: I mean, you have generic telco challenges. Integration is hard, right? We have these giant brownfields and we're constantly putting new greenfield stuff into them. When do we just completely slice off a section of our infrastructure, run it in a vacuum, and allow it to eventually grow and consume the old brownfield? When do we need to directly interface with the brownfield? While I think we're doing a lot of cool stuff in the DC space, I don't think any of us are prepared to put, say, some of our core routers in containers yet on a stack of x86. I know Intel would love for me to do that, but I don't think we're quite there yet when we're talking about something running at 400-gig line rate and pushing hundreds of millions of packets a second. Tool sprawl: this is another big one. And so, going back to Taylor's earlier disclaimer, when I think of the Telecom User Group, I think of us telco users and the vendors who help us, talking about generic challenges in the cloud native space, and tool sprawl is huge.
A lot of times, when we have these different engineering groups, like the one I'm in at Charter, we're constantly doing our little R&D phase, doing something cool with some piece of technology, and we're like, "hey, operations, you need to deploy this, it's awesome." And it gets to the point where we start having these discussions about an integrated vertical stack of K8s versus the vendor-provided stack. Look at (and I don't know if I pronounce the name right, I apologize if I don't) the work that Deutsche Telekom has done around building out their OpenStack and their Kubernetes environments. I would be willing to bet, if we had him on the phone right now, and maybe he'll be on the next call, that tool sprawl is a huge part of this, right? When you present different interfaces, you put a different wrapper around the K8s API, you change all the kubectl commands to, you know, "oc" instead or some Mirantis command or whatever, it's just more and more stuff that operations has to consume. And I think a lot of times people who aren't from either the provider or the provider-vendor side don't track that. In the legacy world of service providers, operations would hire for a specific skill set: they really know Junos, so we can put them in front of all of our Juniper routers and they know that CLI inside and out, or they know IOS XR, or, you know, we've got all this old ALU gear and this person is an Alcatel master from before the mergers, and they're keeping all this 20-year-old equipment going for us. So you get to this point where operations is just like, "no more new stuff," right? You need to consolidate, and this is a tough one in this new space. I think it's one of the hardest challenges, because providers can't run 50 different flavors of K8s and expect operations to consume it, but there also has to be some type of concession to vendors on how they maintain an SLA on a third-party stack, a provider stack, or can they get to the point where... one second, checking chat; and please, by all means, step in and interrupt me at any time. I'm just covering some stuff I had because I was asked. I don't know where my chat window is; I moved it to a different screen, so I can't read it. It's just, if you want to, reach out and call it out. You're good, go ahead. Okay. So it's just some mathematics about the bandwidth you mentioned. But I think one important aspect of all of these is somehow creating some kind of industry standards or best practices; I mean, really something that is defined in the TUG or in the CNF Working Group. Because there are lots of things: Kube by default is a pluggable thing, so you can have lots of different flavors of it, and CNTT, you know, tries to build up this opinionated distribution, which somehow tries to pin down all these moving parts in the infrastructure. But on top of that we still have lots of different things that are not agreed. The best and most specific example, I think, is resource naming in Kube.
Now we have the situation where every operator requires a different naming scheme, and that's not really optimal from the vendor's perspective. And these kinds of things, I think we are missing lots of these kinds of agreements in the industry. Yeah, I agree. I will say, and I'm going to cover something down here in the NFV-specific stuff, but one of the things where I think CNTT is helping, and one of the places, Gergely, where I think we still need some help, is around defined interfaces. We have the reference architecture and reference implementation, but there's still enough room for interpretation that I could set up K8s wrong and still follow the guide, or I could do something that breaks how you interact with it. I'm hoping the best practices would be something that supplements the reference architecture. And I've said this in this channel and the other channel: I don't think we need another reference architecture. We have them, right? I know how I can plumb multiple interfaces into something, I know where K8s goes in the stack; there's plenty of documentation to pull from. I think that next level... and so I'm going to jump around really quickly here. The SOL 00x interfaces in ETSI: this was something where, I mean, there was a reference architecture that the ETSI MANO group put out early on, right? You have your NFVO, your VNFM, and your VIM, and here are all the main pieces, and the vast majority of us built those. The interaction between those pieces was not well defined. I feel like it's only now finally starting to catch up, but there's a lot of ambiguity. And we've already discussed that this isn't a standards body, and we're not going to be able to dictate a bunch of stuff to a whole lot of people; for one, you'll never get Charter, AT&T, and Comcast to build the same stack. It'll just never happen. So it's figuring out what ambiguity needs to be squashed out because it can't, you know, be covered in an API, versus what is something reasonable that, say, a JSON payload or a TOSCA model would be able to account for, with some flags to determine "I'm going down path X" or "I'm going down path Y." I can just tell you, we went down this path where, you know, in the MANO stack, the NFVO and the VNFM are technically two separate functions in the reference architecture, and every single service provider wanted to pull those apart. We wanted a best-in-class NFVO and a best-in-class VNFM, you know, especially pre-ONAP days. And the APIs for SOL003 and SOL005 were very, very poorly defined at the time. You know, the concept of where network creation lives gets really, really political: is the network tied to the lifecycle of the VNF, so it's part of the VNFM, or is it a shared resource in OpenStack, VMware, the cloud, wherever, in which case it goes into the creation flow in the NFVO? And you just get into these things. And then when we come and say, "vendor A, take your NFVO and put it on vendor B's VNFM," I can just tell you, through personal sleepless nights, it's rough. And it's not really the vendors' fault, because technically they were building and designing to the specifications.
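(An aside to make the resource-naming point Gergely raised a moment ago concrete: a minimal Python sketch, using purely hypothetical device-plugin resource names, of how the same class of device can end up requested under operator-specific extended-resource names.)

```python
# Sketch of the resource-naming mismatch: two operators expose the same class
# of SR-IOV-capable NIC under different (hypothetical) extended-resource
# names, so a vendor's CNF manifest cannot request the device portably.
import json

def pod_manifest(resource_name: str) -> dict:
    """Build a minimal Pod spec requesting one unit of an extended resource."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "upf-worker"},
        "spec": {
            "containers": [{
                "name": "upf",
                "image": "example.com/upf:1.0",  # hypothetical image
                "resources": {
                    "requests": {resource_name: "1"},
                    "limits": {resource_name: "1"},
                },
            }],
        },
    }

# Operator A and Operator B name the same class of device differently
# (both names are illustrative, not taken from any real deployment).
for name in ("operator-a.example.com/sriov_netdevice",
             "operator-b.example.com/fast_nic"):
    print(json.dumps(pod_manifest(name), indent=2))
```

The two pod specs are otherwise identical; the only divergence is the resource name itself, which is exactly the kind of industry agreement being asked for here.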
The problem is that those standards and specifications left tons and tons of stuff open to interpretation, so you can't really tell anybody they did anything wrong, but the integration cost and effort just made you ask whether pulling those pieces apart was even worth it. Yeah, but that is what we should prevent from happening again, I think. And yes, I agree that a big part of the problem was that the SOL specs were started very late. So, for those who don't know how these ETSI NFV specs are built: the SOL specs are the real specs, the ones describing the on-wire interfaces; all the other aspects are just, you know, higher-level stuff that you can interpret in a thousand different ways. So what we would consider an API specification, those are the SOL specs, and it's telling that only the SOL specs have an OpenAPI representation. And I'm just giving examples, right? Like, I agree. The only reason I'm listing these challenges is because I think there are things that we could begin to address and fix now. I mean, the Kubernetes API, if you've ever just downloaded it and read through it in Git, is the most sprawling, massive thing on the planet. But, you know, at the same time, we all consume it and we're semi-successful with it, right? So I know it's possible for us to get this stuff in. And I think this is where the CNF Working Group's focus really is: what are the best practices for pushing something into Kubernetes and for consuming it from Kubernetes? And this goes back to the tool sprawl comment too, and there's been a lot of lively conversation about this in the CNF Working Group: the concept of developing a CNF using cloud native principles versus putting practices in place as an operator to consume it, right? And if the CNF is developed with the interfaces and a good specification, as you're pointing out, Gergely, and I've done my homework as an operator to have the infrastructure and the orchestration in place to consume it appropriately, then some of the vendor secret sauce can stay in place but still be consumed. It gives the vendors a chance to maintain their competitive edge, right? At the same time, you guys are developing, you're spending a ton of money on intellectual property, on research, et cetera. So, I mean, I know we're in an open source group right now, but not everybody just wants to give away all their money-making secrets, right? So how do you keep your competitive edge, how do you put some differentiation into your CNFs, but how do I still reasonably run it in my infrastructure without going down the SOL001-through-SOL005, 2015-ish-timeframe path? Because, I mean, it just didn't work out that simply. So yeah, these are the big-picture things; this is why I stay engaged in the TUG, right? To talk to other providers, to talk to vendors. It's kind of like trying to emulate something that happened with ONAP. ONAP was this 1.5-million-line code dump at the beginning; it was all over the place; things worked, things didn't work.
I think one thing, though, where ONAP can really be seen as a success is how the conversation amongst those developers eventually evolved into them collaboratively coming up with the kind of best practices that Taylor described for CNFs. I feel like it slowly happened organically over there, where the collaboration and the "hey, this is a good idea, this is a bad idea" discussion really started to influence that group as a whole for the better. A couple of comments. Yep. MANO, at least the reference implementation, is very tied into the Canonical ecosystem, and it's very difficult to bring it out of it. My bigger point is, we have all these NFV-specific things that we require, and I think we need to involve the cloud providers in this effort, because today we use AVX-512 or SR-IOV, but we cannot natively use AKS or EKS or Google Kubernetes Engine, because the specific requirements that we need are not available directly in the core service. So we have to install our own Kubernetes platform, or something like that, on top of the cloud, and even though they provide it, we manage it ourselves. So I think, to be truly cloud native, it would be good if we involved the cloud providers and had these capabilities built into the cloud infrastructure itself. No, I agree. I mean, that was actually one of the very first things I sent to Taylor and Bill. And for one, these NFV-specific items, I'm not saying these are things we need; I'm saying these are challenges. Yeah, yeah. Right, like they need to be... I'm talking about things we are facing right now. I mean, we have the CNF vendors, so we have these challenges right now. So yeah, that was one of my requests. I've been trying to bug guys like Tim Hockin and a few others at Google. I saw Robbie floating around; I'm hoping Robbie will bring some of the core AWS guys in. But there are certain vendors pitching the idea of running my packet core's control plane in a public cloud and then running my user plane on-prem, right? So, to your point, I really feel that if we're going to talk about large-scale architectures, best practices, infrastructure decisions, having both sides of the coin is important, because they look at things differently than we do. And one of the things, though, is I feel like they also have a common understanding with us on some of the insane regulatory restrictions we have in place. There are times when someone in one of the CNF or CNCF groups will say, "Jeffrey, why are you so hung up on network segmentation?" And I'm like, well, because I have a legal obligation to do that; it's not within my design matrix of deciding whether I want it or not. So we might get a lot of people from Amazon and Google telling us we should never do SR-IOV, I don't know, right? But that's why we want those people there. We want different paradigms, and especially in the TUG versus the CNF Working Group: where the CNF Working Group is trying to give some real, tangible benefits right now and help people deal with the right now, I really see the TUG as a place for us to ask, where should we be in five years? That's one of the drivers, right? If my architecture in 2025 looks like what it does today, just with different components, then we've probably missed the boat somewhere. Yeah, I don't have to keep going through this; I know everybody can read this.
I'll just say, people can come in here and add or delete these; I've put this in the TUG's Google Doc. You're not going to hurt my feelings; some of this might not be relevant anymore. I mean, the big thing too, and I know someone raised this in the CNF Working Group, is putting some of those benefits into some of our best practices, right? I'm always harping on the requirements thing, but I'm obviously here, and I spend a lot of time in these groups, because I do see the value. I want uptime. Kubernetes self-heals containers, right? And we got into this discussion: is the uptime tied to the container itself, or is it tied to something else? And in my mind, it's "is my service reachable?" I mean, the whole point of doing ReplicaSets and Deployments, using self-healing, auto scaling, et cetera, is that you're more concerned with service uptime than you are with infrastructure uptime in this space. And I'll be honest, once again, for the operational folks at my company, that's an interesting conversation to have, because in their world they are infrastructure providers: they provide network infrastructure, server infrastructure, virtualization infrastructure, and their SLA is to make sure that the app level always has availability. So showing in the best practices how that translates into a world where I'm someone who's actually providing the container and Kubernetes infrastructure and I'm less concerned with the application layer, where "hey, these three containers died and then got rescheduled over on these servers" is okay because you never had any outage for the service you're providing. I know that's probably a "duh" comment for most of the people on this call, but for a lot of the legacy network and server world it's not something that just pops right into their head, because they may not be looking at the service layer. They're looking at a bunch of Kibana dashboards that just showed a bunch of containers got torn down and redeployed over here, and their stomach drops for a moment. So anyway, I'm going to shut up for a bit, because you guys all know how to read, and just let the rest of the group guide the discussion on what we would want to do as far as identifying some of the challenges we have right now. I even think it would be a good idea to mark things like "this is a challenge the Telecom User Group sees in general, and we think this is something the CNF Working Group or the CNF Test Bed might help us solve"; we could identify some of those things here and then leverage the other groups to help us with them. Yeah, I want to circle back a little bit. The reason we reached out to Jeffrey on this was that we keep having discussions around what the motivations are and what some of the whys are, whether it's an operator actually doing operations (whether that's a service provider's internal ops team or a group working with the service provider and providing those services) or whether you're looking at it from the CNF developer side and how to consume the resources or how to actually create the internals. It always comes back to: what are the drivers? And we either get something very, very high level that doesn't apply enough to what we're trying to move towards, as far as the cloud native type of thinking (like, what's our reason here?), or it's very specific.
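(On the service-uptime-versus-infrastructure-uptime point Jeffrey makes above, here is a minimal sketch, with a hypothetical CNF name and image, of the declarative pieces involved: a Deployment that keeps a fixed number of replicas running and a Service that stays reachable regardless of which pods happen to back it at any moment.)

```python
# Sketch: declare desired state (three replicas behind a Service) so that
# Kubernetes reschedules failed containers while the service stays reachable.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "amf", "labels": {"app": "amf"}},  # hypothetical CNF
    "spec": {
        "replicas": 3,  # any single pod can die; the ReplicaSet restores the count
        "selector": {"matchLabels": {"app": "amf"}},
        "template": {
            "metadata": {"labels": {"app": "amf"}},
            "spec": {
                "containers": [{
                    "name": "amf",
                    "image": "example.com/amf:1.0",  # hypothetical image
                    "ports": [{"containerPort": 8080}],
                }],
            },
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "amf"},
    "spec": {
        # The stable Service endpoint is what "service uptime" is measured
        # against, not the lifetime of any individual container behind it.
        "selector": {"app": "amf"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(json.dumps(deployment, indent=2))
print(json.dumps(service, indent=2))
```

If pods die and get rescheduled on other nodes, the replica count is restored and the Service endpoints update; the dashboards show churn, but the service-level view never sees an outage, which is the reframing described above.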
So the idea was to try to get something, hopefully a white paper, that addressed those whys and maybe listed some requirements. Some of the things discussed just now were items where maybe they don't work on a cloud provider, and the cloud provider actually doesn't want to support them. And the reason may be that they think the design of the application, and I'm going to say "what's actually running" rather than the specific network functions or anything else, maybe they're trying to communicate that the entire design should be different. Or maybe not; maybe they just don't have the support. But if we can come up with a list of those underlying needs, like what the drivers section of this document has, and then some of the specific current requirements, we could start mapping them out, as Jeffrey was saying at the end. Is this something we're even ready for within one of the current groups? Is this something from an application perspective we can talk about in the CNF Working Group and say, here's a best practice that will help with third-party integration? How can you do that? What best practices help with that? It's probably not one; it's probably many things. But if we have this, then it can be applied wherever. It's probably a document that would be useful in other groups. I could see it for Anuket, within the RA2 efforts; it would be a supplement. There's already been work there, but this would be more content for that. Or maybe the CNF Test Bed, as Jeffrey pointed out: that's a whole tool set for experimenting with various cloud native and Kubernetes technologies running on a base, vanilla Kubernetes that you can currently deploy to Equinix Metal, which was previously Packet. But the idea there is to take whatever comes out of this paper, if we work through it, and use it in many different places. That also allows us, within those groups, wherever that is, to be more focused. So if we say, right now we don't have a best practice in the CNF Working Group that covers this topic, then someone needs to go and start talking with whatever groups are relevant: maybe the Kubernetes Network Plumbing Working Group, it could be SIG Network, SIG App Delivery, whatever it is, to start bringing up these items, these concerns where we find gaps. And they go, "oh, we need to start addressing that," or maybe they bring up something that we didn't think about. But that's the idea: make this a useful set of the challenges, the drivers, all the why behind it, that we can use in all the other groups. Hey, Warren, Jeff, my name is Saik Ali-San. I would just like to share one thing with both of you first. I have a presentation that may give you a little bit of insight about 5G, a little bit about the core, and how you actually add the virtualization technologies to applications and how this is handled in the environment of these virtualization technologies without actually going to MANO. Maybe the Cloud Native Computing Foundation technologies are actually very much related to 5G slicing, because when you start looking at the slice subnet instance, it's very much connected to the set of network functions and necessary resources, and then the compute, storage, and networking is added.
Plus, with the 5G terminal, you have support for eight PDU sessions, which is actually simultaneous support for eight slices. Maybe I will share with both of you first a link to the presentation, and if you find that it might be useful, it provides some insight into the telco side of 5G and then elaborates on how, on top of that, you can add the Cloud Native Computing technologies. It also elaborates on the network data layer, in which, for the network function applications, the context of the application data is separated from the business logic and is stored as structured and unstructured data, and the network functions are providing services that can be both consumed and produced at the same time. And maybe then you can get a bit of the whole picture. I mean, for instance, if you look at Germany a year ago, the 5G licenses had been issued, and now, within one year, there have been applications for 88 private 5G licenses. The question is why. But, you see, I will drop a mail to both of you with a link to the presentation, if you find it to be useful, about the telco side and a little bit about the connection to Cloud Native Computing technologies. Maybe I can present it, or if not, we continue with the work. Can you share the presentation with the rest of the group as well, please? Yeah. I would be happy for you to drop the link right into the meeting notes or into the meeting chat; you can drop it in there or in the Google Doc meeting notes. I'll drop that into the Zoom chat as well, but if you put the link to the presentation there, then I'll put it in the public meeting notes. I will try right away, because it is on SlideShare, actually, under my name. Sorry for this, guys. And I think it would be great if you want to talk more about that, or we'll just stay within the Telecom User Group, talking about the relationship between 5G and cloud native. What I'd say is, and you can go read about this, it's mentioned many times in different articles and so on: 5G has adopted many of the methodologies that you see in cloud native. And cloud native, of course, is an aggregation of many different principles and methodologies. So when 5G was created, the thinking wasn't siloed; it was going out and saying, what are all the industries doing, and what can we do to move from where we are to the next version of this? And so 5G actually encompasses many of those things. Of course, go ahead. No, you're absolutely right, because in this presentation, the link I sent you now in the chat, it starts with the 2015 NGMN paper, and in its 112 pages, I think, or 113, there are 55 references to the edge, whether it's the cloud edge or the cell center and the cell edge. But then in 2017, as 3GPP started developing Release 15, ETSI MEC renamed mobile edge computing to multi-access edge computing; I think it was in early March. And if you look at February, just the month before that, 3GPP actually made three revisions of Release 15, and they made some changes when it comes to mobility, and some changes when it comes to support for multiple radio access technologies. Suddenly it's not only the 3GPP radio anymore; it's also Wi-Fi, internet, you know, Bluetooth. They actually provide some definitions for these multi-RAT technologies, availability, and reliability.
And when you start looking, on top of that, at how they enhanced 5G when it comes to something called local traffic routing and service steering, then suddenly you may get the idea that the problems with whatever they defined as the edge in 2015 were actually resolved in 2018 and 2019 through enhancements in the capabilities; not functions and features, but the groups of functions and features that provide capabilities. For instance, you have support in the 5G core network to select certain functions and features and move them to a so-called service area, define it as a tracking area, and provide service in it. And you can define it not by throughput any longer, but by latency. Suddenly the customer user experience is no longer defined only by throughput, as it was with 3G, 3.5G, and 4G; it is now a combination of mobility, latency, and throughput. Those are the three variables that define the customer user experience, and mobility is no longer only your terminal, your cell phone. Now you have four different types of mobility. You have units that are stationary during their entire life. You have nomadic units: you can move them, but they're stationary when they're active. You have units that are within a constrained area; think about self-driving cars that have a predefined route and only travel along it. And if you look at Germany: Mercedes, Bosch, BMW, Volkswagen, they're getting private 5G licenses. And then you have the fourth group of mobility, which is your cell phone. And then on... I think that this would be a good follow-up discussion. We have 10 minutes, and I want to make sure we have enough time for anybody's feedback. I do think I would probably come at this from: if we're going to take this from the angle of "what is the Telecom User Group," so the Cloud Native Computing Foundation Telecom User Group, what are we trying to do? I would take that perspective for how to pull in 5G. And what I would look at is, this is related to transitioning: how are you going to transition any brownfield to start taking on anything new, any new best practices, any new technology? There are some people who would want to embed it within there; that's fine. Some people would say, here's what we have, but we want to move to something else. Either way, it's a transition, so you're trying to map the terminology and understanding between them. And this seems related to what Jeffrey was doing with regard to telco challenges and drivers, but a very, maybe a specific one. So we probably could even have a new white paper that's just focused on how you relate 5G, what's currently happening, to any type of, whether you say Kubernetes native (going towards something that's more Kubernetes native) or cloud native in general: how do these fit together? That could probably be a white paper in and of itself, but it's at least a bullet point within what Jeffrey was talking about, the challenges. So say a current service provider is already deploying 5G technologies in a 5G network and starting to utilize them in various places: how do they do that while looking at potentially new, maybe even conflicting, processes and methodologies, as well as technology? How do they merge those together? I think that would be a challenge listed within the white paper Jeffrey was putting forward, and then maybe a more extensive one later, but I'd be happy to hear more.
I can... I'm sure other people would like to discuss this more in a future meeting, but does anyone have any comments? We have nine minutes before this meeting ends. Does anyone have any questions or comments on either what Jeffrey or Ek was talking about? Oh, maybe just a quick comment. It was very interesting; thank you, Jeff. I guess my quick comment is that you made a reference to the APIs being very, very large in Kubernetes, very sprawling. I'm not sure that's the problem, or whether I agree that it is a problem; I think there's a misunderstanding of what APIs actually do in Kubernetes that we need to change. One of the things that Kubernetes does... One quick thing, Tal, just because I don't want you to go down the wrong path: I'm saying, despite the fact that it's big, it's manageable, and all of us have figured out how to consume it. I was saying that in the NFV space there are lots of weird siloed and vertical APIs that look for very niche things and require a lot of very specific knowledge of what the end deployment is going to look like. One of the drivers is getting to declarative deployments, so yeah, it's actually the opposite of that. I'm saying that despite the fact that the API is very large and all-encompassing, in my opinion it's pretty consumable on the K8s side, whereas in previous NFV stacks, some people are using Swagger, other people are using homegrown APIs; it's just all over the place. You, as the individual consumer, had better know in granular detail what's going into that NSD. The layer of abstraction that K8s brings through its APIs is, I feel, one of the reasons why it's been so successful. So I was actually saying we should figure out how we continue to emulate that and avoid some of the traps we got into in the previous iteration with NFV. Sure, yeah, we're definitely on the same page here. I think what I'm trying to fine-tune is the language to use, because the way I see it, the paradigm has shifted from talking about APIs to a scheduling paradigm, which is declarative. And maybe that's what you mean, right? I think even in the Kubernetes documentation they are called APIs, but they're not really APIs; they're data structures, right? They're mostly expressed as YAML manifests; of course, behind the scenes, you know, Go creates these resources on the API server. But I think that's the shift. If you compare, for example, Kubernetes to OpenStack: in OpenStack you do have APIs for the various services. Whether it's Nova or Neutron, all these services have their own APIs that are documented. But in Kubernetes that's not the important part. Kubernetes is itself extensible; the API server is actually fairly simple. In the end, it's those resources and those data structures. So it's a shift in language, but it could be important, because in some of the groups I'm working in, if you look at the work being done in O-RAN and other groups, a lot of people bring up this issue of "yes, we need to specify the APIs because we deal with open APIs." What I'm trying to do is shift the conversation to what I call open models instead of open APIs. APIs have become less important in the Kubernetes world, and that's a good thing. Anyway, it's just a little comment, hopefully supplementary and helpful. Sorry, Tal, you're actually very right, because you have a shift from process-centric to data-centric.
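(A minimal sketch of the "data structures, not APIs" shift Tal describes above. The group, kind, and fields below are entirely made up for illustration; the point is that the declarative model is the interface, and the generic, extensible API machinery just stores and serves it.)

```python
# Sketch of "open models instead of open APIs": the artifact that matters is
# the declarative data structure describing desired state, not a
# service-specific call sequence.
import json

# Hypothetical custom resource; group, kind, and fields are illustrative only.
network_slice = {
    "apiVersion": "telco.example.com/v1alpha1",
    "kind": "NetworkSlice",
    "metadata": {"name": "low-latency-slice"},
    "spec": {
        "maxLatencyMs": 10,
        "userPlaneReplicas": 2,
        "isolation": "dedicated",
    },
}

# Contrast with an imperative, service-specific style (OpenStack-like), shown
# only as illustrative pseudo-calls:
#   nova.create_server(...); neutron.create_network(...); neutron.create_port(...)
#
# Declaratively, the whole desired state is one document; a controller watches
# it and reconciles the real world toward it.
print(json.dumps(network_slice, indent=2))
```

In practice this is the kind of document that kubectl apply would submit; the standardization effort then goes into the model itself (which fields, which semantics), which is the "open models" framing.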
I mean, if you start looking at how you will utilize machine learning for closed-loop automation, and you actually connect different architectures, because you have GANA, ETSI's GANA, the Generic Autonomic Networking Architecture, and also ETSI ENI, Experiential Networked Intelligence: once you have, in the applications, the context data separated from the business logic data of the applications, then what you are underlining, Tal, is strategically important. Particularly when, on top, you have the cloud native, and this is the agnostic communication between different architectures, based not on processes but on the data structure, the data characteristics, the data granularity. Yeah, I think... oh, sorry. And then you have a network that is actually divided into subnetworks of subnetworks, and that is very much what is steering the data, because we are going to be, with everything, self-driving cars, all the units, the sensors, continuously generating data. I agree entirely, and I'll also point out that this change, this move to data, has been going on for a while. For example, YANG: the YANG models are the interesting part, not the NETCONF APIs, right? Or whether it's RESTCONF; that kind of decision is not the important one. The important one is the YANG models, and TOSCA too, to an extent, the inroads it has made; it's a way of modeling our resources in the various clouds, in the various layers. So Kubernetes, I think, fits in very well with this move into this data paradigm. So anyway, it may be expanding a little on a comment that you made, but hopefully it's supplemental. No, it's good. So, one of the things that I struggle with is what isn't in scope. So here's the thing, right? What you're talking about, Tal: me and a couple of other developers in our company have been pushing really hard on the concept of OpenConfig, standardized YANG models, with our own little translation in between, to get rid of that tool sprawl. So we write our own common data structures at the top, mostly in YANG, with some other modeling languages as well, and then we push down the corresponding payloads. I think, though, that kind of hand-waving the APIs away... like you said, YANG and how you structure and build services is way more interesting than NETCONF itself, but at the same time, without the transactional nature and the interface that's provided for you, you still have to have something that can consume those data structures. Not every tool is capable of consuming a data structure the way you would want to push it. So I don't know if I fully agree that the APIs have become that trivial. I do agree, though, that the standardization should be focused on how we structure the data and how we present things, getting away from scripted automation towards standardized models that have well-defined values and fields, et cetera. But I mean, I've pushed YANG into things that handle it very poorly, that don't have the concept of a transaction. And if you don't have that, then you get into these issues where I model how I want BGP to look, but there are a lot of CLIs in a lot of different network operator platforms that do not accept transactional configuration in that manner; you have to go in and turn BGP on as a process before you can then configure BGP.
So doing that in a data model, without the interface of the API or the scheduler, all of the necessary componentry involved, means that my data model falls on its face. No, you're very right. But I'll point out that if the topic is Kubernetes, and you implement that using a Kubernetes operator, you turn a difficult protocol and transactional challenge into something the operator does the heavy lifting for, and you just declare a custom resource in Kubernetes and that takes care of it, right? At least that's the... Sorry, isn't it our goal, isn't this our to-be state, where you don't have the transactional model anymore? I think all of these are discussions that I'm hoping we solve in these two groups. Okay, we're at the top of the hour, and thanks, everybody, for the discussion. We're going to switch over to the CNF Working Group for anyone that wants to join us there, and I'm going to drop the link for the meeting notes in the chat if you don't have it. See y'all there, thank you. Thank you, bye-bye. Thank you. Thanks, bye.