Hi folks, we're just giving it another minute as people join, and then we'll get started. All right, welcome everyone to today's LF Networking webinar. Today's topic is dynamic network slicing and open source projects. Just a couple of housekeeping items before I introduce our panelists and get us started. Attendees will be muted during the presentation. However, if you have questions, feel free to use the Q&A function; there's a window at the bottom right of your screen. You can click that at any time, type a question, and we'll all see it. We are going to leave time at the end for live Q&A, but like I said, if you have questions during the presentation, feel free to type them in at any time. A recording will be available in the next few days; all attendees will get that link emailed, so if you have to miss anything, don't worry, you can catch up on it later. All right, speaking with us today are Srini Addepalli with Intel and Tushar Doshi with Robin.io, both LF Networking member companies. And without further ado, Srini, I'm going to kick things over to you to get us started today.

Next slide. Next slide. Okay. My name is Srini. I'm going to talk about the first three items here: what is happening in the network slicing market, a high-level system architecture, and then I'll try to map, to the best of my knowledge, the open source projects that can be used to realize network slicing. One thing I want to point out is that this is a learning process for us as well, so please feel free to comment and ask questions. Then Tushar is going to talk about what is required for productization, and finally we will open it up for Q&A.

Next slide. Okay, this slide shows one view of the various 3GPP 5G network functions across multiple types of edges, just as one example. We believe the starting point for providers is to build upon what is happening in 4G: provide cellular access to mobile phones and data access to smartphones, laptops, and whatever other devices need access to the internet. You have cell towers, and near the cell tower is typically where the DU goes, along with some provider applications that help with DU functionality. You have far edge sites where the CU (CU-CP and CU-UP) might go, regional data centers where you would have a distributed UPF, SMF, AMF, and possibly PCF, and core data centers with the rest of the 5G control plane functions in addition to a centralized UPF. Since there are multiple CNFs that need to be deployed across multiple types of edges, you tend to have some kind of network service and application orchestrator, as shown above, spanning all these blocks, and on top of it the traditional OSS/BSS. So with that description of the picture, we believe the starting point is no slicing: one slice or no slice, whichever you want to call it, mainly meant for eMBB traffic that typically requires very high throughput.

Next slide. Even with one slice, which is the starting point today, enterprises would like to deploy applications closer to the UPF to provide the best user experience to their customers.
And that is already happening: we see press releases from operators and hyperscalers, with hyperscalers providing these enterprise deployments near the UPF, and we believe that eventually the telcos themselves will provide these services. That requires traffic redirection functionality: when a user is trying to access a service in the cloud, but that service is locally available near the UPF, you would like that traffic to be redirected to the local enterprise application. So that functionality is required. And from the orchestration perspective, in addition to orchestrating network services and provider applications, you also require a multi-tenant way of onboarding applications and deploying them across multiple regional data centers, as shown here. So that's the next step, we believe, and it's already happening, at least at the thinking stage.

Next slide. When you have applications at the edge, the natural question is who is going to do the security. Before, when the applications were in the cloud, security was taken care of by companies like Cloudflare or by security functions deployed in the cloud itself. Now that applications are deployed at edge locations, the expectation is that the security functions are also available at the edge locations. So that's one more opportunity we see for operators: to provide security slices for enterprise applications deployed at the edge. Whenever you have these security functions, the critical requirement is service function chaining. Essentially, based on the customer's request, operators should be able to deploy security functions coming from different vendors and steer traffic through them in a chained fashion, such that traffic going from the UPF to the applications passes through the set of security functions. So this is another opportunity, we believe.

Next slide. Then in phase two, our belief is that there would be multiple slices, but they are all static or generic slices. We talked about the eMBB traffic profile, which requires high throughput; industrial automation and automotive kinds of segments require ultra-reliable low-latency communication, so there is a need for another slice there; and in the case of IoT, massive numbers of IoT devices require the mMTC kind of traffic profile — low throughput, small messages, but possibly a very high number of messages. So we believe the next phase of slicing would be a simpler slicing: a fixed number of slices offered by providers, and in this case what you really require is very simple slicing orchestration. That's what's shown at the top there.

So let's see what dynamic network slicing is. So far we talked about one slice or no slice, and then three or four slices specific to different traffic profiles. But when we say dynamic slicing, we really mean a way of doing the life cycle management of slices on an on-demand basis. There is some evidence that some people might even like a slice to exist for only an hour or two, so it really needs to be on demand, and everything should be automated very well.
And since there are going to be multiple slices, you do not want to dedicate resources; you need to share resources and allocate them on demand as slices are created. The other important factor is that now that we are using shared infrastructure for multiple slices, there is a need to adjust resources based on the traffic that is actually occurring and based on the SLA that was agreed upon with each particular slice user.

So let's see who the types of dynamic slice users are. Next. One is enterprises that have the highest security requirements. Some have data-leakage concerns: if the same CNF instance is shared among slice users, there is always a concern that there could be some data leakage. That's one kind of user. Next. The second kind is enterprises that require the highest performance determinism. Today's CNFs, for example, may not be designed to provide good performance isolation within a single CNF instance, so one may prefer a different CNF instance per slice to provide that determinism. Some enterprises — maybe government organizations especially — would like to bring their own CNFs. I don't know when that is going to happen, but it's something I keep hearing from at least a few people: some organizations may like to bring their own security or similar CNFs. Next. And of course there are mobile virtual network operators: they would take services from MNOs and in turn provide services to enterprises. So these are the four types of users that really require dynamic network slicing.

Next slide. So how does it look? This is a little bit of a busy picture, but at the bottom we have generic slices coming from the provider itself, and then different organizations getting slices from the provider — here we show organization 1 and organization N, each getting three slices from the provider. Those organizations in turn provide services to the enterprises: enterprise app slices. In this complex environment you really require the dynamic slicing orchestrator, and you also require service assurance to ensure that the SLAs of these slices continue to be maintained under many different traffic scenarios.

Okay, next slide. So with all this evolution we believe in, this picture shows the kind of stack required to realize network slicing. At the bottom you see telco edge clusters, and typically these edge clusters would be Linux-based. We know that Kubernetes is becoming the platform of choice for 5G in general, and we also know that Kubernetes by default is not meant for telcos and CNFs — it's meant for IT applications. So industry-wide there is a lot of activity, many open source projects and initiatives, making Kubernetes ready for telcos and edges; there are many telco edge extensions today. And then we believe some slicing extensions are required over and beyond what is there today.
So at the bottom, we are showing multiple edge locations of different types. On top of that you have the network service orchestrator, which is meant to deploy 5G CNFs and applications across multiple locations: you can define a network service with multiple CNFs, declare as intent that this CNF should be brought up on these types of locations and that CNF on those, and the network service orchestrator will take care of deploying the CNFs of a network service across multiple locations and setting up the connectivity required among them. That's already there. On top of that, in blue, is where we are talking about the dynamic slicing orchestrator. The dynamic slicing orchestrator, as 3GPP defines it, typically has an NSMF, the network slice management function, and multiple NSSMFs, network slice subnet management functions — one each for RAN, transport, and core. We believe you even require an NSSMF for Kubernetes itself, and there will be a requirement for additional NSSMFs to maintain the SLA and the QoS values.

Okay, with this kind of stack, let's look at the functionality expected. Next. This is where the DSO, which we believe should take care of slice LCM, comes in. As I said, it has to be extensible to add more NSSMFs, and it needs to be multi-tenant: there would be multiple tenants getting different slices, and they may need to onboard applications themselves, so it has to be multi-tenant as well. Next. Then on the network service orchestrator side, as I mentioned, a lot of functionality is already available, but for slicing we believe some additional features are required. One is slice-to-Kubernetes-namespace automation. Another is the translation of abstract SLAs: if you look at the GSMA GST or any 3GPP templates, they talk about SLAs in terms of latency, throughput, and so on, but what does that mean in terms of the actual resources that need to be dedicated? That translation is a complex one that is required, in our view. The other point is that security and performance isolation are very, very important for slicing to be successful, so we believe there is a need to automate that aspect as well. Okay, next. The same applies on the edges: you require security and performance isolation enhancements, and confidential computing if somebody really wants their workload to be completely confidential even from sophisticated attackers who have access to the edge location. Local inferencing is also very important: if you want dynamic SLA enforcement, inferencing has to happen locally within each edge location. Next. The last one is service assurance. For SLA enforcement you sometimes need a predictive way of knowing things, and that's where we believe an ML/DL framework is important on top of existing service assurance platforms.
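To make the slice-to-Kubernetes-namespace automation and per-slice quota idea more concrete, here is a minimal sketch of what a Kubernetes NSSMF might do when a slice is created, using the official Kubernetes Python client. The slice name and quota values are placeholders for illustration, not anything defined by 3GPP or ONAP.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
core = client.CoreV1Api()

slice_id = "slice-enterprise-a"  # hypothetical slice identifier from the NSMF

# 1. Create a namespace for the slice on this edge cluster
ns = client.V1Namespace(
    metadata=client.V1ObjectMeta(name=slice_id, labels={"slice-id": slice_id}))
core.create_namespace(ns)

# 2. Cap the resources this slice may consume on this cluster
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name=f"{slice_id}-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "16",
        "requests.memory": "64Gi",
        "limits.cpu": "16",
        "limits.memory": "64Gi",
        "pods": "50",
    }))
core.create_namespaced_resource_quota(namespace=slice_id, body=quota)
```

A real NSSMF would repeat this across every edge cluster hosting the slice and would derive the quota numbers from the SLA translation step described above.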
Next. Okay, now that we've talked about functionality, let's figure out which open source projects can realize these things. Next. For the dynamic slicing orchestrator, ONAP started work on network slicing a while ago; by now one release has been made and a second is in progress, so ONAP — its network slicing use case — could be a good candidate there. Next. From the network service orchestration perspective, we have EMCO today, the OpenNESS Edge Multi-Cluster Orchestrator, which can deploy network functions, whether CNFs or VNFs, across multiple Kubernetes-based edge locations. ONAP also has functions like CDS for day-2 configuration, and it has components such as External API and SDC for defining complex network services based on the standards. We also believe TSN switch automation is required, especially for URLLC kinds of slices where time-sensitive networking is very critical; I don't have an open source mapping there, but it is an important aspect, and possibly ONAP's SDN-C could be a good candidate. On the platform side, LF Edge Akraino defines multiple blueprints, and one of them is ICN, the integrated cloud native blueprint, which is Kubernetes-based and has multiple telco edge extensions to make it really useful for telcos. There are also cloud native observability tools available. Istio and Envoy are already there, but they may need to be enhanced for the 5G core to implement the SCP functionality; that is something still pending to be developed, in my mind. Next. On the tooling side, there are a lot of cloud native observability tools: M3DB for the time-series database, Elasticsearch for storing logs and traces, Spark and Analytics Zoo for AI/ML model training, MinIO for storing models, and Grafana, Kiali, and Kibana for visualization. What is required specifically for slicing is slice-aware service assurance — there is some work going on in the ONAP and O-RAN Software Community (OSC) communities with the non-real-time RIC — plus whatever is needed beyond the network side as defined by 3GPP.

Okay, next slide. So let's see where things stand, in our view. For network slicing, work needs to be done across multiple projects, because it is really a big end-to-end stack, and efforts have started in multiple open source communities. It's very good that ONAP did quite a bit of work on slicing orchestration, thanks to the ONAP community. But I believe some key items still need to be addressed. One is security and performance isolation, which become very important for slicing. End-to-end QoS with time-sensitive networking needs to be taken care of for the success of slicing in general. SLA continuous enforcement is very key. Then there is eSIM per-slice provisioning and authentication: people might like to have their own SIM for their network slice, so per-slice eSIM provisioning becomes very important; I did not find this in any open source project today, but it could be a good candidate. Another one is cloud native 5G CNF configuration, which more than anything makes it simple for the orchestrator to automate the configuration of CNFs.
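As a rough illustration of the "SLA continuous enforcement" point above, here is a small closed-loop sketch: it queries a per-slice latency metric from Prometheus and scales out a per-slice UPF deployment when the SLA is breached. The Prometheus endpoint, metric name, deployment name, namespace, and SLA value are all assumptions for illustration; none of them come from ONAP, OSC, or 3GPP.

```python
import requests
from kubernetes import client, config

PROM_URL = "http://prometheus.monitoring:9090"   # assumed Prometheus service
LATENCY_QUERY = (                                 # hypothetical per-slice metric
    'histogram_quantile(0.99, sum(rate('
    'upf_packet_latency_seconds_bucket{slice="slice-a"}[5m])) by (le))')
SLA_SECONDS = 0.010                               # assumed 10 ms latency SLA

def p99_latency() -> float:
    """Return the current p99 latency for the slice, or 0.0 if no data."""
    r = requests.get(f"{PROM_URL}/api/v1/query",
                     params={"query": LATENCY_QUERY}, timeout=5)
    result = r.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

config.load_kube_config()
apps = client.AppsV1Api()

if p99_latency() > SLA_SECONDS:
    # Remediation action: add one replica to the (hypothetical) per-slice UPF
    scale = apps.read_namespaced_deployment_scale("upf", "slice-a")
    scale.spec.replicas += 1
    apps.replace_namespaced_deployment_scale("upf", "slice-a", scale)
```

In practice this decision would sit behind the ML/DL-driven service assurance layer mentioned earlier rather than a single threshold check.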
As I said, it's about putting together all these solutions from multiple open source projects. When ONAP defines and tests network slicing, they do go with some simulated CNFs and simulated edges, and that's good, but at the end of the day there is a need to combine multiple projects and showcase things working end to end, with good quality, optimized and secure. I'm quite excited that LFN has started something called the 5G Super Blueprint, whose intention is exactly that: showcasing end-to-end scenarios, including network slicing. The link is given here; I'm sure you'll be interested in learning more about it, and LFN would be happy to get your contributions. Okay, with that, I will hand it over to Tushar.

Okay, thanks, Srini. So I'm Tushar Doshi, and I'm part of the orchestration team at Robin.io. Some of you might know that Robin.io is the platform of choice for Rakuten's 4G/5G deployment, which spans from the near edge to the far edge to the central data center. Today I'll be talking about the learnings we had while building this overall ecosystem with the help of various open source projects, including Kubernetes, and the challenges involved — and you can extrapolate those challenges to what happens when you actually implement dynamic slicing on top of it. Next slide please.

So these are the various challenges that we faced. The first is disaggregation. We have seen various hardware vendors — Quanta, Dell, HPE — and various switches, routers, NIC cards, FPGA devices, and GPUs across various environments. The platform used for deployment should be able to handle all the differences and complexities of these different types of devices and give a consistent interface to the end user, so that applications being onboarded do not see any differences. There can be multiple operating systems: at the far edge, where the vDU runs, you need a high-performance operating system, and the typical choice there is a real-time kernel, whereas in the central data center you'll have popular operating systems like Red Hat Enterprise Linux or CentOS. Then you have to handle patching, upgrades, and so on for these various environments, and the problem of scale comes in here as well, but your platform should be able to handle all these different challenges posed by different applications. You also have the challenge of multiple heterogeneous sites: far edge locations for your DU, near edge locations for the CU, SMF, and so on, and the central data center for your BSS and OSS applications. The platform has to behave differently depending on whether it's deployed at the far edge or in the central data center. At the far edge, the problem is really performance: the platform has to take as few resources as possible so that most of the resources are available for your actual RAN workloads. In the central data center the performance problem becomes less pressing because you have resources to spare, but there the problem is scalability.
With so many applications running, each with different requirements, your platform needs to really scale while handling all these components. In the central data center you might get dedicated resources just for the platform, but scalability is a challenge there. So the platform has to understand all these different flavors of site deployments and handle them well.

The fourth point is that most of these applications need NUMA-aware deployment, and that includes NUMA-aware allocation of CPU, memory, FPGAs, GPUs, and any other devices the application might need. One would say this is handled by Kubernetes today using the Topology Manager, together with the CPU Manager and the various device plugins, but when we looked at it there were at least two things still missing. One is that there is no centralized scheduling for NUMA-aligned resource allocation in Kubernetes: the scheduler might pick a node where the pod fits, but when the pod actually gets placed, there might not be enough resources available within a single NUMA node on that host, and the deployment might fail. The second issue is that there is no NUMA-aware memory alignment as of 1.20; Memory Manager support is coming as alpha in 1.21, which is a great addition to what Kubernetes already provides, but since applications are ready to deploy now, we had to find our own solution for the memory alignment and centralized scheduling issues.

The fifth point is multi-vendor CNF and VNF applications. Each CNF or VNF is different and comes with its own requirements, so you cannot have a single solution that fits all. Some applications might require a MACVLAN CNI, others might ask for an OVS plugin, and so on, and there are requirements like multiple IPs on a single interface. The platform should handle all of that, be very agile, and provide support for what is needed when the deployment is actually happening. And even for 5G CNFs, a lot of vendors are still using VNFs, and it will take time to shift from VNF to CNF, so the platform has to handle VNFs and CNFs in a similar way and provide similar functionality. One example: say you have network policies in Kubernetes. If you apply those network policies in your cluster for CNFs, similar network policies should also get applied to your VNFs. Whatever support you add for container-based workloads, you have to make sure it is also available, as far as possible, for VM-based workloads on Kubernetes.

Then, with multiple vendors, there is the challenge of multi-tenancy. The platform has to handle multi-tenancy very well, and this is where quotas and limits come into the picture. Say you have an application that auto-scales depending on its load, and another NF that also auto-scales: if you don't have limits, one application might grow more than you wanted and the other NF might not have any room left to scale. So it is very important to consider all this while designing a solution, and the same applies to dynamic network slicing: a slice cannot take more than what is allocated to it, otherwise it will just run over all your resources.
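As one hedged example of such a guardrail, here is what setting per-container defaults and ceilings in a tenant's namespace could look like with the Kubernetes Python client, so that an NF that declares no limits cannot silently grow unbounded. The namespace name and the values are assumptions, not Robin- or 3GPP-defined settings.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Defaults applied to any container in the tenant's namespace that does not
# declare its own requests/limits, plus a hard per-container ceiling.
limit_range = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="tenant-defaults"),
    spec=client.V1LimitRangeSpec(limits=[
        client.V1LimitRangeItem(
            type="Container",
            default={"cpu": "2", "memory": "4Gi"},            # default limit
            default_request={"cpu": "500m", "memory": "1Gi"},  # default request
            max={"cpu": "8", "memory": "16Gi"},                 # ceiling per container
        )
    ]))
core.create_namespaced_limit_range(namespace="tenant-a", body=limit_range)
```

Combined with a namespace-level ResourceQuota like the one sketched earlier, this keeps one auto-scaling NF from starving its neighbors.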
The last point is networking, and networking is the one area where every vendor has its own view of things. There are some common asks, such as multiple-interface support, where they will have an SR-IOV interface plus a Calico or OVS interface; they will ask for different CNI support, and so on. This complexity also shows up with distributed application deployments, something like a RAN where the DU is running at the far edge and the CU is running somewhere in the middle — you have to handle that very well. And in the central data center there are again different requirements: some applications require Kubernetes service-to-service communication, but some actually require pod-to-pod communication across different Kubernetes clusters, so the platform should provide those functionalities as the applications need them. And even though we call it network slicing, it's not just about the network: the practical definition looks more like slicing your compute and storage as well as your network so that your predefined SLAs are met. Why does storage come into the picture? There are applications like CDNs which need low latency, but they also need fast storage to achieve that low latency. If you do not have a storage stack with those performance characteristics, your app will face issues even when your network is blazingly fast, so you have to consider all of this while designing your network slicing solution. Next slide please.

So, since compute, storage, and network all come together, network slicing is more complex than what we have already done as part of 5G deployments, and there could be more challenges because it has to be dynamic, and you will need a metrics and policy engine and so on to achieve that dynamism. Next slide please.

Here's a real-life example of an NF deployment, and this is how it should generally look. The NF pods you see here can be part of the same application or of different applications. With this model, if you consider the far edge, you should be able to deploy different DUs on the same physical host. The blue CPUs in the middle are the non-isolated CPUs, used for the OS, Kubernetes, and the platform, and the yellow ones are the isolated CPUs used for the application. As you can see, all the pods are getting CPU, memory, SR-IOV devices, huge pages, and so on from the same NUMA node, and the pods on NUMA node 1 likewise get all their resources locally. Next slide please.

This is the ideal placement for a URLLC application, which cannot tolerate even the additional latency introduced by cross-NUMA communication, and with this deployment you should still be able to handle scaling, healing, and so on.
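As a rough sketch of what one such NF pod request could look like — a guaranteed-QoS pod (requests equal to limits) whose CPU, memory, huge pages, and SR-IOV VF the kubelet Topology Manager, running with a single-numa-node policy, can then place on one NUMA node — here is an example using the Kubernetes Python client. The SR-IOV resource name, Multus network name, image, and namespace are all assumptions that depend on how the device plugin and CNIs are configured on a given cluster.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# requests == limits for every resource => Guaranteed QoS, which is what
# allows Topology Manager / CPU Manager to pin and NUMA-align the pod.
resources = client.V1ResourceRequirements(
    requests={"cpu": "8", "memory": "8Gi", "hugepages-1Gi": "4Gi",
              "intel.com/intel_sriov_netdevice": "1"},   # assumed device-plugin resource
    limits={"cpu": "8", "memory": "8Gi", "hugepages-1Gi": "4Gi",
            "intel.com/intel_sriov_netdevice": "1"})

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="du-0",
        annotations={
            # Multus secondary network, e.g. an SR-IOV NetworkAttachmentDefinition
            "k8s.v1.cni.cncf.io/networks": "sriov-midhaul-net",
        }),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="du",
            image="registry.example.com/vendor/du:latest",   # placeholder image
            resources=resources,
            volume_mounts=[client.V1VolumeMount(
                name="hugepages", mount_path="/dev/hugepages")])],
        volumes=[client.V1Volume(
            name="hugepages",
            empty_dir=client.V1EmptyDirVolumeSource(medium="HugePages"))]))

core.create_namespaced_pod(namespace="ran", body=pod)
```

Note that, as discussed above, the scheduler itself is not NUMA-aware, so even a correctly shaped request like this can still land on a node whose free resources are split across NUMA nodes.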
With today's Multus and Kubernetes implementation, what happens is that the user has to put the SR-IOV interface names — essentially the device resource names — in the requests and limits section of the pod. When a node goes down, or you have to move the workload, there may not be enough resources under the same resource name on the other node, or the interface naming might be different. So the platform should be able to handle that too: on failover it should adjust dynamically based on the new node's availability, which SR-IOV devices are available there, what their names are, and so on. That is one of the challenges we saw while deploying these solutions.

Considering all the challenges we have seen, we concluded that automation is a really, really important piece of this whole ecosystem. That is also due to the fact that the scale of 5G deployments is huge — even a single cluster might have a node count in the three digits — so anything and everything in the environment should be automated, even very small tasks. This automation should include not just NF deployment; it should go right from configuring the bare metal to NF management. Here the platform should augment what open source technologies already provide, whether it's Kubernetes, Prometheus, Istio, or any other open source project, and add value by integrating these various projects or adding features that make the overall deployment usable. From our experience, the solution should be of production quality and very agile, because of the new requirements that keep coming from different vendors. This is also because 5G deployment overall is new, and there are problems we have never seen before due to the introduction of Kubernetes and containerization in production. And when there is a problem, so many clusters or deployments can get impacted due to sheer scale that you have to have a good-quality solution; if you do not, there is a huge impact when you actually hit an issue. The next point is that the platform should be extensible so it can easily integrate with existing solutions out there, including your BSS and OSS stack. Also, everything in the platform should be API-driven so that it becomes automatable, meaning whatever you do can be automated by the platform or by somebody else who is automating the whole workflow. And it should work for all the common scenarios: the platform should behave the same whether you are doing a RAN deployment at the far edge, a CDN deployment at the edge, or deploying OSS, BSS, or database applications in your central data center — the usability should be similar for the end user. In these scenarios you have to consider all kinds of failure scenarios: a disk could fail, a NIC card could fail, a physical host could fail, a rack could fail, or your whole data center could go down. You have to consider all of this while designing the platform of your choice.
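To give one concrete flavor of the "everything API-driven and automated" point, here is a tiny, simplified sketch that watches node status through the Kubernetes API and cordons any node that drops out of Ready so its workloads get rescheduled elsewhere. A real platform would add debouncing, alerting, and remediation; this only illustrates the pattern, and it is not how any particular product implements it.

```python
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

def node_ready(node) -> bool:
    """True if the node's Ready condition is currently True."""
    for cond in (node.status.conditions or []):
        if cond.type == "Ready":
            return cond.status == "True"
    return False

w = watch.Watch()
for event in w.stream(core.list_node):
    node = event["object"]
    if not node_ready(node) and not node.spec.unschedulable:
        # Cordon the node so the scheduler stops placing new pods on it
        core.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})
        print(f"cordoned {node.metadata.name}")
```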
And with all the building blocks in place, it should be possible to do dynamic slicing, driven by a policy engine and relying on various metrics coming from your infrastructure, your applications, and so on — possibly from different clusters — but first you have to have all the building blocks in place to get there. Next slide please.

So Robin has three solutions that try to solve these problems. The first is Robin MDCAP. This is a workflow-based automation system that provides automation right from bare metal: once the bare metal is racked and stacked — whether at the far edge, the near edge, or the central data center — it does the BIOS configuration, OS install, and Kubernetes install, and then it also gives you the ability to deploy NFs. So if you just want to deploy a RAN, you give it a bare-metal server, say "I want to deploy a RAN on this bare metal," and MDCAP will do the end-to-end flow for you. It provides management of physical elements like bare metal, switches, and routers; it manages infrastructure elements like Kubernetes clusters; and it provides service management, such as deploying a network service across different Kubernetes clusters, giving you an end-to-end view and a single pane of glass across your whole deployment. The second product is Robin CNP. It is a CNCF-certified Kubernetes platform with all the bells and whistles required for running production-level telco and enterprise applications, with a strong focus on high availability and disaster recovery. The platform comes with Multus, SR-IOV, GPU support, the Topology Manager, and so on, all as part of the bundle. Last but not least is CNS, purpose-built, application-aware cloud native storage. It provides all the standard functionality that any CSI-compliant storage stack provides — replication, encryption, snapshots, volume grow, and so on — but one differentiator is the ability to take a complete snapshot of your application: when you take a snapshot, it doesn't just snapshot your storage, it also snapshots your Kubernetes objects. So if something goes wrong in the cluster you can always restore from that backup, or if you want to move your application from one Kubernetes cluster to another, it allows you to do so. All these products can work independently — MDCAP can orchestrate non-Robin Kubernetes clusters, and CNS can be deployed on any other Kubernetes to provide storage — but the whole is greater than the sum of its parts, so they work well together. Next slide please.

And that is what I had for my presentation. I think you'll take over for the questions, right? Yes, thank you both, I really appreciate the thorough overview. So now we're going to move into the Q&A portion of the presentation, and it looks like we've got a good number of questions in the chat window, so I'm going to go ahead and read some of these out and get us kicked off. Our first question came in earlier this morning: is the Kubernetes NSSMF equal to Kubernetes itself acting as an NSSMF, or is it a separate NSSMF maintaining Kubernetes resources? I can take that.
Yeah, when we say Kubernetes NSSMF, we really mean the second part of your question: it is a separate NSSMF registered with the NSMF. Why do we need this? When 3GPP defined the standards, they assumed many scenarios — physical, virtual, and containerized — but the way I read most of the specifications, they largely assumed physical functions. Since the CNFs are actually being deployed in Kubernetes environments in the edge clusters, it's also necessary to automate some Kubernetes configuration on a per-slice basis. For example, when you create a slice, you may want to automate namespace creation across multiple edge clusters, and you may also want to automate resource quotas to ensure a given slice doesn't exceed its resources, and more. So we believe this K8s NSSMF would take on the role of automating all of that.

Great, thank you for that. Next question: curious about the underlay — I assume these tenancy separations could be maintained at the lower network devices; if these are dynamic, are they handled through VRFs, etc.? Right. As I was saying in the previous answer, it is very important to maintain security and performance isolation. In the case of Kubernetes, as you may know, when we create a pod it actually creates a network namespace, which from a networking perspective is essentially a VRF, so that part is already there. On top of that, since you could have multiple pods for a given slice, you also require another construct Kubernetes already provides, namespaces, and those namespaces can be configured with quotas and things of that nature. So yes, absolutely: underlay configuration for resource management across multiple slices is important, Kubernetes is one of the underlays as far as the slices are concerned, and we should leverage as much as possible what Kubernetes offers to provide this kind of isolation.

Wonderful, thank you. Next question: could you please talk about the parameters we have to pass from the orchestrator to core, RAN, and transport for slicing? Yeah, 3GPP has defined all of this already, but predominantly, in the case of core and RAN, what needs to be passed is which network service is to be brought up. The expectation is that the administrator or user has already onboarded all the network-slice-specific CNFs or apps onto these domain orchestrators, and then when the slice is created, the appropriate NSSMF invokes the orchestrator with the right network service to bring up. In addition, it needs to indicate which network slice it is, because at the end of the day there is a function called the NSSF, the network slice selection function, that needs to be configured. So the information is: the network service to be brought up, the network slice ID to be configured in the NSSF, and some slice-specific SLA parameters to enforce. These are the typical parameters you need to pass from the NSSMFs to the orchestrators.
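Purely as an illustration of those parameters, a slice activation call from the NSMF down to a domain NSSMF or orchestrator might carry something shaped like the following. The structure and field names are assumptions for readability, not a 3GPP- or ONAP-defined API; only the S-NSSAI notion of SST/SD (SST 1 = eMBB) comes from 3GPP TS 23.501.

```python
# Hypothetical payload shape only, to show the kind of information that flows
# from NSMF to a domain NSSMF/orchestrator when a slice is activated.
slice_activation_request = {
    "slice_id": "slice-enterprise-a",
    "s_nssai": {"sst": 1, "sd": "000001"},     # SST 1 = eMBB per 3GPP TS 23.501
    "network_service": "5gc-core-slice-ns",    # pre-onboarded service descriptor name
    "placement": ["regional-dc-east"],         # target edge/regional clusters
    "sla": {
        "latency_ms": 10,
        "dl_throughput_mbps": 500,
        "max_ue": 10000,
    },
}
```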
Great, thanks, Srini. Here's another one for you: where can we follow the progress of using Istio with 5GC? There are really two parts to this question. One is the deployment of the 5G core control plane functions with the Istio service mesh. There was a demo given at LFN ONES last year by a company called Lartner Networks; they used free5GC as the 5GC example. They hadn't used Istio at the time, but my understanding is that they intend to, so hopefully the 5G Super Blueprint we talked about will consider Istio for security and traffic management among the 5G control plane functions. Having said that, I also want to point out that Istio's traffic management somewhat conflicts with the traffic management required by 3GPP. 3GPP defines its own mechanism for load balancing and for selecting destination microservice instances, based on the NRF and 3GPP-specific criteria, whereas Istio has its own criteria for balancing connections from client CNFs to server CNFs — predominantly round-robin or weighted round-robin. It's purely traffic-based, but 3GPP requires more parameters. So there is a work item — at least from the Intel perspective we are thinking about it, though we are not working on it yet — to enhance the Envoy sidecar and Istio to implement the 3GPP SCP functionality. I don't have a status or a start date, but if you send me an email I'll be happy to keep you updated.

Great, appreciate that. Here's a question for Tushar: now that OpenStack is being run on Kubernetes, will this OpenStack use the cloud native storage instead of having to build its own? Yeah, I'm not sure about the progress of that particular project — it sounds like an orchestrator on top of an orchestrator. If the real use case here is running VNFs on Kubernetes, there are very nice projects, KubeVirt and Virtlet, which take care of deploying your VMs on top of Kubernetes, and yes, those can use cloud native storage instead of building their own. So if the ask really is OpenStack on Kubernetes, it could very well use cloud native storage for the VMs rather than building its own.

Great, thank you. We have a question here about edge stacks: which edge stack projects — StarlingX, Akraino, CORD, et cetera — are more sustainable and easier to use for network slicing for smart cities? Yeah, I guess it's very difficult to answer that question. Even within Akraino there are many, many blueprints; the reason there are so many is that there are so many use cases, and different use cases may require different capabilities and functionalities. From the smart cities perspective — I don't know about the other blueprints, frankly — the blueprint that we at Intel and a few other community members are working on is the ICN blueprint family, the integrated cloud native blueprint family, and one smart city application is actually deployed using that blueprint. I can only speak to that, but I'm sure many other blueprints and projects are equally applicable and suitable for smart city applications. Okay, great. Yeah, that was a little bit of a tricky question. Here's another one for Tushar.
Is SR-IOV ready for prime time? Yeah, at least in the deployments we have, SR-IOV has been an integral part of almost every real telco application, whether it's a CDN application or a DU, RU, or other applications. It's something every vendor is using, and it is running in production without any issues, so yes, I'd say it is ready for prime time.

Great. Here's another one for you, Tushar, a follow-up to an earlier question: in Kubernetes, the pod network is flat across namespaces. How do we ensure that application traffic between apps that belong to different namespaces is isolated? That's a good question, and the answer is yes and no. It is possible to isolate the traffic, and Kubernetes network policies help here: you can define a network policy that isolates a single pod or a particular namespace, or you can take a group of namespaces, carve them out as a tenant's namespaces, and have them isolated together, or it can be label-based as well. Even Istio provides some isolation there. But all of these things only work on the primary interface — for example, Calico-based networks. When you are talking about SR-IOV-based pods, you have the challenge that these Kubernetes rules don't apply, because Kubernetes natively only knows about the single primary interface, and network policies only act on that primary, Calico-based traffic. This is one of the challenges that still needs to be solved, as far as I know, and it's being worked on.

Yeah, just to add to that: as Tushar mentioned, we can have multiple network interfaces on the pods, and he's absolutely right that the networking is flat across pods. At least for the primary network interfaces — people use Calico or other CNIs — Kubernetes already defines the concept of network policies, so you can create policies such that traffic across namespaces, or even across pods within a namespace, is restricted; that facility is available. But the question remains that a CNF could have secondary interfaces and secondary networks, and who takes care of security among those? That's where, as Tushar mentioned, work is going on at this time; it's a work in progress. In the Akraino ICN project, as I mentioned, we have a secondary CNI called OVN4NFV, and with OVN4NFV a network policy concept similar to the one provided for primary networks is available for secondary networks as well. So you may want to check that out.
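For readers who want to see the primary-network mechanism Tushar and Srini are describing, here is a minimal sketch with the Kubernetes Python client: a default-deny ingress posture for a namespace, with an allow rule keyed on a namespace label so only the same tenant's namespaces can reach it. The namespace and label names are placeholders, and as noted above this only governs the primary CNI (e.g. Calico), not SR-IOV/Multus secondary interfaces.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# Selects all pods in the namespace, denies ingress by default, and then
# allows traffic only from pods in namespaces labeled tenant=tenant-a.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="tenant-a-isolation"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),   # empty selector = every pod here
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                namespace_selector=client.V1LabelSelector(
                    match_labels={"tenant": "tenant-a"}))])]))

net.create_namespaced_network_policy(namespace="tenant-a-apps", body=policy)
```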
Yeah, thank you for the thorough response there. We now have a question coming in from Central America, from someone currently at a telco or CSP company: how would 4G LTE be integrated with dynamic slicing, and would white-box switches and bare-metal servers be used? Yeah, I don't know exactly how to answer that, but if you're looking at a large number of slices, dedicating servers or physical resources to each slice is going to be a challenge from a cost perspective, a manageability perspective, and an automation perspective. I still believe that whenever we talk about dynamic slicing — and I'm assuming that means on-demand slicing — it's good to go with some kind of virtualization or containerization platform like Kubernetes on top of bare-metal servers. That's my view.

Great, thank you. So we just have time for one more question. Before we jump into that, I just wanted to remind folks that yes, a recording of this webinar will be available in the next day or so; we'll be emailing that out to all of the attendees, and it will be posted on the LF Networking website. Okay, so the last question is about a simulator: do we have a simulator to test topologies with Kubernetes or containers? If that question is for me, I'm not sure what the simulator is about. Srini, does it go along with the previous question about how 4G LTE would integrate with dynamic slicing? Yeah, from the dynamic slicing perspective, the ONAP community does have simulators for the CNFs for testing purposes; those are part of ONAP itself today. And in general, simulators are very common — for example, the free5GC demo given in the last year or two uses gNodeB simulators. So I don't know exactly how to answer the question, but on a per-project basis you will likely find simulators for testing purposes. Okay, that's all. Okay, great. Thank you. Well, we are at time, so I appreciate everyone's participation today, and we'll see you on an upcoming webinar. Thank you so much. Yeah, thank you all. Thanks everyone. Bye.