Excellent, I think we can get started. Okay. So, agenda — what do we want to discuss today? I think we should first say a couple of sentences about last week. At least for me, it was very... Oh, yeah. That's perfect. So, I'll kick it off. At least for me, it was very useful to meet a good part of the community there at ONS, and I think we had a lot of useful discussions and a lot of things to follow up. One of the things that was mentioned — I think, Jeffrey, you were kind of initiating this — was the use case around emergency response, so maybe that could be one of the topics we bring up today. Yeah, certainly — nice intro, Nikolai. From my side, we had the nice panel. Besides the NSM talks on Kubernetes and the network service orchestrator, it was a bit of an up-leveling of NSM; given that it was half an hour, we never really could get deep into NSM, but overall it was well attended. To add to Nikolai's points, beyond the stress on the use cases, especially coming from Jeff, the other thing I would like to add is around the T-shirt-sized infrastructure profiles. This is nothing new — it's very much what the cloud providers do: they have fixed VM sizes and even fixed hardware configurations when it comes to networking or compute, in terms of FPGA or SR-IOV. There is an effort called OVP, which you may all know, which is very much pushing this for the VM path. Now we want to make sure we are doing the same not just for VNFs but also for CNFs, and also bring out any differences the CNFs may have. I had a brief chat with Taylor, who heads the CNCF testbed calls.
And I think he is very well aware of the topic and basically said it could be a very good joint follow-up in their calls too, which occur essentially every other Monday — so next Monday, and so on. We don't go all-out with them, but sort of complement. Hey, guys — start with one of the other things real quick without me, and I'll talk about my use case as soon as I get back; I just need to check in with my boss real fast. Okay. Prem, anything? Yeah, sure. On the ENSM especially — we had a nice demo, and there was a lot of interest in it. People were keen on NSM, and they were also looking at the benefit of adopting NSM. Of course, the common question was how it is different from a service mesh, right? So there were a lot of discussions around NSM, I would say, across all three days. The other thing is there were also interesting use cases — for example, why not extend NSM concepts to VMs as well as PNFs, since they are getting exposed to CNFs; can we chain all of it together? That was one question people were asking. Overall, it was a lot of learning and a lot of interesting questions. Anybody else? Anything on ONS before we jump into possible agenda topics? If you're interested, I can showcase what we demoed during ONS. Oh, yeah — let's do it. So, basically, the NSM demo, then use cases from Charter, and then we can see whether we get to the interop, time permitting. Perfect. Okay — Prem and Ramki, before we jump into the demo, I just had one or two questions from a use-case point of view. Is NSM worrying about VNF deployments as well, or does the use case assume they are all CNFs? There will be a period of time where you will have VNFs alongside CNFs. It's not that the carriers don't see the value of CNFs, in terms of the overall change management and all that.
But from a reality point of view, conceptually the overall service mesh can be applied to VMs as well. And if you look at the KubeVirt project, it does allow you to manage VMs as well, right? The high-level question I have, from a use-case point of view, is: should we define a hybrid of CNFs and VNFs coexisting and achieving a higher-level service goal? I truly believe so. Pani, I think that's a good question. As you well know, for VMs on Kubernetes there are multiple options, right? There is KubeVirt, and there's also Virtlet, which is very lightweight, and there may be more coming. So the question is how we create that alignment — how we put it all together. In that sense, we didn't deem it the highest-priority topic: basically, get our CNF story straight first, and then, as we progress, make sure we are aligned completely on the VNF side as well from a Kubernetes perspective. But definitely, if you have something right away — some common aspect that can be used across these different realizations; after all, all of them use the kubelet, right? — then we should definitely look at it. It's the same observation as with the infrastructure profiles: they're coming from the VM lens. There's a nice GSMA spec — I'll send it to everybody; in fact, I'll put a link to it in our document. It's funny: I think the EPC vendors probably started it in GSMA, but it's very general, right? Those infrastructure profiles have nothing to do with just EPC alone. We do want to make sure they apply to CNFs, but now we're also thinking the other way: whatever we're doing for CNFs should apply to VNFs too, right?
Yeah, especially the service-level aspects, the service-chaining aspects — all those are independent of the virtual compute, whether it's a VM or a container. Yeah. And that's fine: if we take a phased approach, let's get the details worked out for a pure CNF, and then we can look into that as a second step. That's also fine; I was just wondering if that topic had already been discussed. No, we haven't gone through that detail. Basically, at least near-term, our answer in any of these use cases is the ENSM — saying, hey, we can inter-work with, for example, an OpenStack-based solution for VNFs, or a PNF via ODL, which was the crux of the demo from Prem, right? Basically, we'll treat the inter-working as the priority. And to your point, Pani, yes, we will start looking at this from a common perspective. I'm thinking more like: let the different initiatives settle down first. But I've heard support — Mirantis started Virtlet, and more than Mirantis, Intel is a big fan of it from a lightweight perspective; it's extremely lightweight. KubeVirt is a little heavier, I believe, but it's very rich in functionality. Yeah, I see. Okay. Just to add to it, I agree. One other thing is that at least the non-CNF functions are looked at as an NSM entity within NSM. What we are trying to do is ensure they participate: all the translations needed between the NSM world and the non-NSM world get done via the ENSM. That's the whole crux. So from a service-chaining point of view, how we do it is what we would need to look at — how the construct gets translated. For example, if it's NSH-based or non-NSH-based, how do we get it done on a VNF? Yeah, makes sense, Prem. I think the gateway model is a good transition.
It's just that I have heard first-hand from carriers that, going forward, they would surely like a single orchestrator and single lifecycle manager for their VNFs and CNFs. Yeah. Sure. So with that, let me get into the demo. Prem, you would probably need to share. Yeah, I'll share. Let me know if you're able to see my screen. Yeah. Okay. So what we have done is we have essentially developed a shim layer — which can be thought of as an ENSM, to begin with — between OpenDaylight and NSM. I believe all of you are familiar with OpenDaylight: it's an open-source SDN controller, and it provides the ability to support multiple southbound protocols — for example, OpenFlow, NETCONF, BGP-LS, PCMM, and anything else you want to add, right? At Lumina, we have also extended the framework to support physical network functions. What does that mean? We basically have a JSON-RPC endpoint: you can define an OpenConfig YANG model that represents a device, perform operations on that model, and behind the scenes they get translated into CLI. With that, you can use OpenDaylight as the single platform to manage any device, whether it's programmable or not. With that same framework, we have essentially extended it to NSM. I mentioned OpenConfig for physical devices, but you can bring in any YANG model, define the operations you want to perform on it, and we have the translators that do the translation, right? So similarly, the NSM part of the OpenDaylight integration has two sides: toward NSM, it talks gRPC; on the other side, it talks JSON-RPC. That's the whole essence of it. So what you see is essentially the YANG.
So all of the YANG gets mounted as a YANG endpoint, and here you see NSM as a YANG endpoint. From a network function or network service perspective, we have taken the sample network function — the ICMP responder that was created by the team. What this ICMP responder does is create a pod, inject an NSM interface into it, and then give you the ability to ping those interfaces, so you know the NSM interface created in the pod is active. That's the intent of this demo. We have created four operations. One is create-interface, which invokes the create part of the ICMP responder; show-interfaces shows the pods and their respective interfaces; and ping pings them. With that, let me get into it. Basically, these are all JSON endpoints, and the operation is a JSON operation, which gets translated into gRPC and goes to NSM. So for now — one clarification here. The ICMP responder is running in NSM, that is, as a pod in Kubernetes, right? Yeah, that's in the NSM context. And the endpoint is just an abstract endpoint, which receives the request and creates the interface? So what happens is it essentially exposes that endpoint for create, so from the NSM perspective it is the endpoint, right? The endpoint is nothing but the ICMP responder endpoint, which provides all this functionality. Did that answer your question? My only confusion is... oh, it's an equivalent ICMP responder endpoint on the other side, on the NSM side. It's a pathway equivalent. Let me take an example. Okay — for example, let's assume you have a VPN executing in the NSM context.
The VPN would essentially expose those operations to the outside world — that is essentially the endpoint. If you're an NSM client, you follow the same mechanism: look for the particular endpoint, get the endpoint, and then invoke it, right? Similarly here, an NSM client has been developed which is triggered by the ENSM. It talks natively within the NSM context, but to the outside world it translates and presents things in whatever format you want. Okay. Right? Yeah. Okay. So let's get into the demo. To begin with, I'm going to just show the interfaces. As you see, it has executed that call, and it shows the endpoints — the pods that have been created. There are these three pods. NSC-VPP means it connects via a socket, and vpp-agent means it's via the memif (memory interface). And the IP you see is not the CNI IP of the pod but the NSM IP. Each of these pods has an NSM interface created in it: NSM creates the interface and then assigns an IP to it. Okay. Now we can ping this interface. So my question is: is it really a point-to-point interface between the NSM pods, or are you just creating an interface that is open, not necessarily point-to-point? Okay — so this example just creates an interface within the pod, right? It just injects the interface into the pod. I see. But it also means that interface is globally addressable — that's how this works, I guess. That's right. Yeah. Okay. And Prem, is this interface visible to the existing CNI, or are you replacing the CNI? As per NSM, the CNI will not be touched, right? Okay. So hold on a second.
I will see if I can log into the pod and show you what exactly happened behind the scenes. I'm going to stop sharing and then... Prem, I'm basically trying to figure out: can you make this work with the existing CNI constructs? Yeah. So to answer your question, NSM assumes the CNI will continue to exist as-is. Then, let's say you want an interface — NSM injects an interface into the pod. And is that injected via the CNI, or...? Independent of the CNI — on top of the CNI. Okay. Yes, they are independent. Right. So you're still using the CNI to insert an interface, right? Well — here is what it is. When I say CNI, you're talking about reaching the pod? I'm presuming each pod already has an IP interface. No — in addition to that IP interface, there will be one other. Yes, in addition to that, you will have this interface. When you do a show-interface or look at the IP addresses, you will see the CNI interface as well as the NSM interface. Got you. And just for my benefit, Prem, why do you need multiple interfaces? That is a requirement. Okay, let me give a bit of background about NSM — Nikolai is the master of how things work; I can give you more from a use-case perspective. The way the whole NSM effort started was this: in a typical telco workload, you for sure need multiple interfaces, right? Today, a Kubernetes pod has just one interface, the CNI interface. That's one. Second, many telco workloads need their own non-IP channel — for example, an MPLS LSP or some other signaling — which cannot be done via the conventional IP interface from the CNI, right?
Hence the whole concept of multiple interfaces — interfaces over which you can run MPLS or whatever else you need. Okay. So this can be both control plane and data plane; it's not limited to just one. Primarily it is data plane — these would be data-plane interfaces — but as you said, they can be used for control plane as well as data plane. Yes. I see. In the examples you see in NSM, this particular interface is the one used to connect with your client for data-plane purposes: the VXLAN would be created using these interfaces. A similar injection happens on the client side, and this is all done by the NSM infrastructure. I see. So Prem, one request, since I think Jeff is back: if there are any detailed questions, perhaps we should follow up separately. Jeff has some interesting ideas on the near-term use cases, and we want to make some progress on them in the next half hour of the call. Yeah, sure, definitely. Let me just show you one thing here — give me a second. Was your demo recorded at ONS, Prem? We could look at it offline too. No, it wasn't, but pretty much this is the whole demo. You see in the ping that the endpoint is the ICMP responder NSC; this interface got pinged and you got the response back. The other part is create-interface, which creates and then shows the interfaces, but in the interest of time I'll take it offline; depending on interest, I can show the pods and the interfaces within them. Okay. So that was the demo. Cool. Over to you, Ramki.
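The translation path Prem walked through — a JSON operation on the mounted YANG endpoint being turned into a gRPC call toward NSM — could be sketched roughly as follows. This is a hypothetical illustration only; none of the names below (`NsmRequest`, `OPERATIONS`, the method strings) are the actual Lumina or NSM APIs.

```python
# Hypothetical sketch of the ODL <-> NSM shim ("ENSM") from the demo.
# Northbound: JSON operations on a mounted YANG endpoint.
# Southbound: gRPC calls into NSM. All names are illustrative.

from dataclasses import dataclass

@dataclass
class NsmRequest:
    """Stand-in for a gRPC request message toward NSM."""
    method: str
    params: dict

# The operations exposed in the demo, mapped to illustrative gRPC methods.
OPERATIONS = {
    "create-interface": "nsm.CreateConnection",
    "show-interfaces": "nsm.ListConnections",
    "ping": "nsm.PingEndpoint",
}

def translate(json_op: dict) -> NsmRequest:
    """Map a northbound JSON operation onto a southbound gRPC call."""
    op = json_op["operation"]
    if op not in OPERATIONS:
        raise ValueError(f"unsupported operation: {op}")
    return NsmRequest(method=OPERATIONS[op], params=json_op.get("input", {}))

# Example: the 'show interfaces' call used at the start of the demo.
req = translate({"operation": "show-interfaces", "input": {}})
print(req.method)  # nsm.ListConnections
```

The point of the shim is exactly this one-way mapping: the YANG/JSON side stays uniform across devices, while each southbound (CLI, gRPC, etc.) gets its own translator.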
Thanks. So — do you want to talk about some of the potential near-term use cases? Sure. This isn't specifically the one for my individual team, but I think it's a very compelling use case in the provider space in general. I don't know if you watched the keynotes at ONS last week, but AT&T did their emergency-response video, right? They have this specialized tag team that does this, and at Charter we have this too. In the event of a natural disaster, infrastructure and services are obviously among the key things that have to come up very, very quickly, and it's incredibly variable and difficult. I was thinking of a very simple use case: something like an ENSM sitting on some small cells, or some access points — Wi-Fi access points, whatever — stitching directly into a provider network, say Charter's access network, and dynamically provisioning ports and scale for the increased load. I think that would be super compelling. I haven't had a chance to diagram this out, so it's hard to put into words, but imagine a tornado goes through rural Alabama and the town is packed. As opposed to the more static enterprise use case, what we're trying to show is: a tornado hit the town, infrastructure's gone, cell towers and power have been taken out, and we have all these responders in the area who have no connectivity. If we could show a use case where we bring devices online dynamically — as they come up, they would essentially be clients.
There would be pre-established endpoints in the access network — on both the RAN side and the Metro Ethernet side of the house — presenting themselves as endpoints and accepting connection requests. So you could literally just plug devices in, come up on DHCP, request your network service, and suddenly bring an emergency-response network into being, largely through a declarative network service. Does that make sense? Yeah — that's a very good use case, Jeff. This is a generalized depiction of the edge-computing use case; it doesn't get into specific details of emergency response, but it shows generalized edge computing — how everything can happen at the edge. There are two clouds: the RAN cloud on the left and the edge-computing cloud on the right. These could all be collapsed into one cloud, but you're spot on — it's a nice fine-grained variation of this: the emergency-response use case, where you set up the service on demand. Right. And it's not just the multi-tenancy aspect of bringing more and more consumers online — specifically using something like an ENSM short-term, such as Prem's ODL model, or writing the abstraction for a network service manager to live on a PNF itself — but the QoS and bandwidth provisioning is where it could be really, really dynamic, right? If an emergency has happened and we start putting up these small cells or wireless access points in the field, the access and core networks would prioritize this traffic to make sure emergency responders are able to communicate with the services they need. Exactly. Sorry.
Are we thinking of creating it SD-WAN style, where they set up a box and it all just works? Yeah, that's exactly what I'm thinking: basically using NSM to create a more dynamic SD-WAN. SD-WAN today — at least all the major vendors, whether it's VeloCloud, Viptela, basically everything but Meraki — is still a very traditional top-down, centralized orchestration model, versus going with a more distributed, declarative model. And I think you and I talked about this — that's exactly what I was trying to articulate: basically a mobile, on-demand SD-WAN. But if I understand it, it's much more than plain SD-WAN: plain SD-WAN as it is today doesn't have the mobility aspect, and this would also bring in the dynamic part. There are several vendors where you can put, say, 5G on a box; but to your point, Ramki, it goes a bit beyond that, because this would be us potentially setting up the small cells as well. Yeah, exactly — that's what I mean. This actually goes to the root — go ahead. Yeah. This goes along really well with my understanding of emergency services, because, interestingly, when you're sending people out into the field, you don't generally want to use voice: voice is very inaccurate and can be misheard. So it's very common for these types of things to go over digital modes. If we are able to incorporate a 5G component into that, where things just automatically work, that will help them send a lot more data through — instead of just text, they'll be able to effectively send photos, sound, and other media in a digital format. So — do you want to double-click on this?
Basically, I have the PowerPoint for this. Do you want to take it and craft it into an emergency-responder use case — say, here are the network functions — and double-click on it? Sure. I'll make a copy of this and work with Nikolai and Frederick on expanding out what we were discussing last week. Yeah. If I were going to sum it up in a single sentence, it's a declarative radio access network with an SD-WAN component. Perfect. Cool. I would also be happy to help in any way with putting the end-to-end together. Much appreciated. On the use case, just some thoughts: I was thinking everything is dynamically instantiated, including the RAN component — basically small cells being dropped in the field, and the whole new network being created, including the RAN. Is that the thought? I think long-term that's where the real value is. We could start small and get the SD-WAN component up first, or go in reverse and just show the dynamically provisioned RAN — either way — but I think the story is not complete unless you have both ends of it, right? If you could set up a small cell, and instead of everybody individually trying to go through their phones, you have some type of 5G CPE that's SD-WAN capable, reaching into the metro and asking for QoS and capacity, and doing internet offload — basically tunneling straight to one of the POPs and then out into the great blue, to whatever emergency services the on-site personnel need — that, in my mind, is what we'd want to show, right?
And to Frederick's point, there are so many different components: tons and tons of data needs to be shipped; people doing triage are probably sending photos in real time to emergency professionals who are not on-site, helping them assess injuries; you're going to take pictures of structural damage, and you have people on-site who don't necessarily know whether they can pick some component of a fallen building off of somebody without causing more structural damage. It's a very, very chaotic and dynamic thing. If we could show that we can make the network portion of that less chaotic and more dynamic — where really all you do is plug this device in, it comes onto the network, and it's programmed so that it requests network service X — that's pretty powerful. No, makes sense. And what's interesting is that though 3GPP has so many fancy acronyms, GTP-U is nothing but a plain UDP packet. It's very simple. You know, I think if we took a demo like this to KubeCon US in, what is it, November-December, people's jaws would drop. Perfect. And I feel that starting from the RAN is extremely attractive, because it gives it end-to-end weight — basically the dynamic RAN creation, right? Up to a point. And a GTP tunnel is nothing fancy. So, Jeff — Pani here. I actually like the use case the way you have defined it, and it's a very valid one. Just wearing the technical hat: obviously it's two pieces, right? One side is the SD-WAN piece.
The other side is the radio part — the RAN part of it. If you look at the 3GPP spec, it clearly separates the radio access from the packet access network, right? SD-WAN kind of starts at the packet side of the overall end-to-end picture. So even from a use-case point of view, would you agree it's like bringing a new SD-WAN site on board? But you bring up good points about what kind of dynamism and special requirements that particular SD-WAN site needs to support, and how we can make its integration with the 4G or 5G LTE network tighter, and so on. Does it make sense to separate them into two pieces, while obviously defining specifics about how tightly the two can be integrated? Yeah. You're going to have network service managers that are unique to each section of this, and I think from a development standpoint you can develop them independently of each other. For the end-to-end showcase, like you said, it would basically be a new site activation — but specifically, instead of a wired connection, it would be a 4G- or 5G-enabled CPE, right? So you drop a small cell, or multiple small cells, into the disaster location, and those are dynamically provisioned into the RAN. Then, once that's up, you bring up an LTE- or 5G-capable CPE. Because the thing about SD-WAN is that it's really hard, despite the fact that lots of really smart people have been designing solutions for a long time. I have lived through these deployments — first deployments and launch deployments — and they all have their strengths and weaknesses, and there are always caveats, gotchas, and pain points.
So basically, having a LAN that you create, tying back into the radio access network via the LTE antenna on the CPE, which then helps you backhaul all of your traffic to the actual internet itself — I think that is cool. So, to your question — because I tend to talk in a verbose and roundabout manner — from a technical standpoint, yes, I think they can largely be their own standalone use cases, but showing the integration between the two is where the powerful message comes in. Exactly. Just to add to that: if you look at this use case, it's almost like the left side — the BBU and the 4G components — is your RAN, and then basically the EPC. At that point you do a handoff to the packet network, and from there you can either hand off directly to an edge-computing application — maybe a first-responder application right on the CPE device itself — or it could go to the internet, right? And Ramki, the EPC itself can be a CNF — there's nothing stopping it — so it could be managed too. Yeah, exactly. So literally, all of this was trying to demonstrate a distributed edge-cloud use case, but all of it could be collapsed into a single box. Totally possible — everything. Right. And then what comes out is basically an SD-WAN interface: in is radio, out is SD-WAN, all in one box. Yeah. Actually, I think that is where 5G comes in handy, right? The whole responder function can sit within the premises of the RRH, or alongside any app that sits with the RRH, so that it can respond depending on the information it has. Yeah.
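Ramki's remark that GTP-U, for all the 3GPP acronyms, is nothing but a plain UDP packet is easy to make concrete: the mandatory GTP-U header is just 8 bytes carried over UDP port 2152 (3GPP TS 29.281). A minimal sketch — the helper name is mine, not from any library:

```python
import struct

GTPU_PORT = 2152  # GTP-U runs over plain UDP on this port

def gtpu_encap(teid: int, payload: bytes) -> bytes:
    """Prepend the 8-byte mandatory GTP-U header (3GPP TS 29.281).

    Flags 0x30 = version 1, protocol type GTP; message type 0xFF = G-PDU.
    The length field counts everything after the mandatory header.
    """
    header = struct.pack("!BBHI", 0x30, 0xFF, len(payload), teid)
    return header + payload

pkt = gtpu_encap(teid=0x1234, payload=b"inner-ip-packet")
print(len(pkt))  # 8-byte header + 15-byte payload = 23
```

The inner IP packet rides unchanged behind that header, which is why terminating a GTP-U tunnel in a CNF, or in a box at the edge, is not an exotic operation.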
This actually ties in nicely — there was a very nice presentation from Verizon and Intel on a sort of converged CTE. I think this falls very well into place: a CTE in a box, which does both the network-function processing and the application. So maybe a first-responder-in-a-box is how we could visualize it — of course with the internet-facing interface: it's got the RAN or wireless component on the left side and, on the right side, your applications and the path to the internet. Yeah. Just to summarize from a technical point of view: the SD-WAN branch router, if you visualize that as a CNF or VNF, combined with an EPC, whether it's serving 4G or 5G — these two network functions coming together — I think there will be a lot of interesting use cases. That's correct. Pani, I would also like to add: if they're bringing up the RAN, then whatever is needed for the RAN — basically the BBU component also, the BBU and EPC, because they're all separate, right, in 5G. Yeah. Now, the BBU part — in your picture, is it already containerized? It could be a container or a VM; each component could be either. I've seen more VM-based ones. It depends — it all depends on the latency requirements. Some are not virtualized at all, if you want ultra-low latency; or you virtualize the BBU itself if you're not critical on latency — that's one model. In this case, first-responder latency is probably a very critical aspect, right? So perhaps the L2 part will not be virtualized. Yeah. And that first-responder use case is a very interesting scenario.
What I've seen from my Nokia days is something like an emergency ambulance picking up a patient, where the only reachability you have is through 4G/5G over LTE, and the lower the latency, the better, when you need to reach the hospital you're driving towards. Yeah. I think one of the nice advantages we have is that we can establish different types of connections and facilitate them. There are two things I'm thinking of. First, there will be a large number of different types of systems that need to be interconnected, and we can help facilitate those interconnections. Second, those connections have different requirements: for one class, I have to get the message through as fast as possible in order to save someone. There's also another use case where the speed of the message is not as important, but what does matter is that the message has a very strong probability of getting through under adversarial conditions. For those, it's okay to be slower; it's okay to bring in things that have higher error-correction and resilience properties. One really good example: in 1989, during the earthquake in the San Francisco Bay Area, they had trouble getting messages through saying, you know, Frederick Couch is okay, he's at a friend's house, or something like that. So they sent it all off to Dallas; it actually went over the ham radio groups, and they created a database there that people could call into to ask, "Hey, is this person okay?" This ended up preventing people from flooding in trying to find their loved ones, because they knew people were okay. In that case the speed was not important, but it was important for the message to get through. Cool. Yeah, that's interesting.
The other interesting fact about the emergency ambulance network: it's like a disaster is happening every day, and these guys are first responders. I think it's worth highlighting a distributed edge that serves everyday disasters, where a bunch of ambulances are networked together and, as you point out, it's not just about latency, it's about guaranteeing delivery. With the QoS components from the edge to the core to the final hospital network, that kind of service could also be potentially highlighted. So what you're saying is essentially that both sides could be fully mobile, right? The SD-WAN side is also completely mobile. Correct. I would actually say that on the far left-hand side, instead of the enterprise devices, you would have something like a Nokia CPE there with a 5G antenna. We show it making a connection to the RAN, which was also dynamically provisioned, and then there would be another tunnel going through the radio access network to that PE router, and from there out to the internet or to the trusted edge compute. The SD-WAN would be its own workflow, all based on IP traffic, but it would use the radio access network to tunnel into the core network on the right-hand side. Yeah. It's almost like each ambulance is a branch by itself, Ramki; a mobile branch. Yeah.
And as you guys know, the first hour after the accident, the steps the first responders take will decide whether that patient lives or dies, so that two-way communication is actually important, as you probably already know. So this is where the shameless plug comes in: even with orchestration, if you had a service provider that had network service managers running in their access network and their core, you would still have to define the variables every single time. You'll have some type of data model that shows what the service looks like, but that service model needs to be populated. With the network service model method, if these managers are out there, then as that edge site, the branch, comes up, instead of us having to make sure the environment variables are set correctly so the service model applies correctly, it just comes onto the network and requests the overlay. It removes a lot of chance for human error. And to Phani's point, that first hour is the most critical. Instead of spending the first 45 minutes configuring the network and getting it up, it should be: plug it in, make sure it has an IP address so it can hit the underlay, and once it does, it makes that request, and any of the available small cells or even the main towers that says "I have this capacity available" can accept the request for a network service. And it's just done. Yeah, it should be a UNI kind of model, dynamically asking for what service it needs. Absolutely. And it needs to happen in real time.
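The request-based model described above, where the device asks the network for a service instead of an operator pre-populating per-site variables, can be sketched in a few lines. All names here (`NetworkServiceRequest`, `Manager`, `provision`) are hypothetical illustrations of the idea, not actual Network Service Mesh APIs.

```python
# Minimal sketch, assuming a set of managers (small cells / towers) that
# each advertise spare capacity; the first one with room accepts the
# edge device's request for a network service.
from dataclasses import dataclass

@dataclass
class NetworkServiceRequest:
    service: str          # e.g. "emergency-overlay"
    client_id: str        # the branch/ambulance CPE making the request
    capacity_mbps: int    # bandwidth the client is asking for

@dataclass
class Manager:
    name: str
    free_mbps: int        # capacity this cell can still offer

    def accept(self, req: NetworkServiceRequest) -> bool:
        """Accept the request only if enough capacity remains."""
        if req.capacity_mbps <= self.free_mbps:
            self.free_mbps -= req.capacity_mbps
            return True
        return False

def provision(req, managers):
    """First available manager with capacity serves the request."""
    for m in managers:
        if m.accept(req):
            return m.name
    return None

towers = [Manager("small-cell-1", free_mbps=20), Manager("tower-A", free_mbps=100)]
req = NetworkServiceRequest("emergency-overlay", "ambulance-17", capacity_mbps=50)
print(provision(req, towers))  # small-cell-1 lacks capacity; tower-A accepts
```

The point of the design is that no per-site configuration exists to get wrong: the same request works identically at any site that can satisfy it.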
So I think the more we highlight such dynamic use cases for containerized network functions, the more visible it becomes to the non-technical audience too. Yeah, exactly. And Phani, I like the way you framed the time criticality: in an emergency, every second, or every millisecond, matters; we can phrase it either way. Every millisecond counts, right? Yeah. I paid extra attention to that in my Nokia days, because I wanted to get more marketing mileage out of it. My suggestion internally at that time was: let's take one of the European cities, give our SD-WAN solution free of cost, demonstrate it, and it'll be on the news all over. It connects better with every human when you relate it to a real-world use case like that. Yeah. In fact, what I can do is distribute the slides for this to the team. And Phani, it looks like you may also have some interesting slides from Nokia; I mean, anything from the past that is publicly shareable, we can use to tie the story together. Yeah, the real-world use cases are pretty public; it's about how innovatively we map the technology we are developing towards them, and the extra value we can add. No, what I meant was whether there is anything pictorial we can leverage from your past work, because it takes time to construct a pretty picture, and a nice picture always helps; just thinking out loud, if there is something shareable. Yeah, no, nothing there is secret or proprietary.
I can draw up a few things. Yeah. Perfect. Awesome. Most of the use cases are actually coming from the carriers, like what Jeff is explaining; they're not really from the vendor, so they're not vendor-specific. Yeah, perfect; makes sense. We have heard this emergency use case from several operators too. And I think it's the right one, because the amount of dynamism needed is extreme; basically, super-fast automation. Yeah. One of the things that I'm really excited about is to take not only the fact that it's NSM-style, but also the ability to dynamically pick and choose the communication technique and method based upon the needs. So with auto-healing, we could dynamically increase the resilience of the link if it fails too many times and make sure that message gets through. Or we could increase the overall speed: if we see that the connection is very resilient, and our error-correction statistics show fewer errors, we can reduce the error correction and get that message through faster, while still hitting the SLA. So I'm really excited about this particular use case, because it shows a very complex, dynamic situation, and the distributed nature of this system lends itself very well to solving the problem at hand. I think this is fantastic. The last little marketing bit, too: we're trying to show actual cloud-native networking, in that we are not customizing the infrastructure for this use case. The whole concept is an immutable infrastructure with a common deployment model. So whether I'm in Alabama or in Massachusetts, disaster strikes, I go there, and the network service is the network service. I put devices out there.
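The adaptive behavior described above, hardening a flaky link with more error correction and trading redundancy back for speed when the link proves resilient, can be sketched as a small policy function. The FEC levels and thresholds here are illustrative assumptions, not actual NSM behavior.

```python
# Minimal sketch, assuming three hypothetical FEC (forward error
# correction) levels: more FEC means a slower but more reliable link.
FEC_LEVELS = ["low", "medium", "high"]

def next_fec(current: str, recent_failures: int, error_rate: float) -> str:
    """Pick the next FEC level from recent link statistics."""
    i = FEC_LEVELS.index(current)
    if recent_failures > 3 or error_rate > 0.05:
        i = min(i + 1, len(FEC_LEVELS) - 1)   # harden the failing link
    elif recent_failures == 0 and error_rate < 0.01:
        i = max(i - 1, 0)                     # trade redundancy for speed
    return FEC_LEVELS[i]

print(next_fec("medium", recent_failures=5, error_rate=0.02))   # "high"
print(next_fec("medium", recent_failures=0, error_rate=0.001))  # "low"
```

In a real system the SLA check would gate the downgrade branch, so the link never drops below the guaranteed delivery target while chasing speed.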
It makes the request, and then I expect repeatable results every single time without having to change the infrastructure itself. Yeah, we are sounding more like marketing. Picking the right use case is also quite important. Absolutely. It was a positive joke; we have to market, we've got to justify to our bosses to keep letting us do this. Exactly. This is awesome. Fantastic; we're at the top of the hour. If needed, I think we could also meet on demand. I will send the slides; Prem and I created them, literally cut-and-paste in PowerPoint. I'll share the slides with everybody on the team. We are meeting two weeks from now, right? If needed, I'd be glad to meet earlier too. Do you want to pick some other slot this week or next week, or decide offline? Yeah, let's first just put some stuff in there; we can add Google comments and then let it grow organically. Perfect. Do we want to do it in Slides? I find Google Docs not very friendly for pictures; PowerPoint seems the most friendly, at least from what I've seen. Well, you've got this in a Google Doc, and they do have Google Slides. No, this is actually done in PowerPoint; it's all coming from PowerPoint. Google Docs is very painful for drawing this directly, so you do it in PowerPoint and just paste the image into the Google Doc. That's how this was made. Yeah, I'm open to whatever. I do have to run to my next meeting, though, guys. So I will have the PowerPoint, and we can kick it off from there. Awesome. Thank you all. Bye.