Okay. Welcome, everyone, to this session on end-to-end 5G network slicing in ONAP. I am Seshu Kumar from Huawei, and I have my friend Swaminathan from Wipro who is also participating in this. Together we will try to go through the journey of what we have done on the network slicing part in ONAP. Long story short, 5G slicing, as we all know, is a very critical feature of the 5G arena, and it helps us in different forms and shapes. What we have done in ONAP is split this into multiple phases. We started the work at the beginning of this year as part of the Frankfurt release, the previous ONAP release, and we split the requirement into multiple iterations in an agile mode, solving it piece by piece. Again, I will not go into the advantages of 5G slicing, but in short, our key focus is to align ourselves with multiple SDOs, including MEF, 3GPP, the IETF and other SDOs, and to standardize on the open APIs we are talking about here across all the systems. We also want to cover both brownfield and greenfield scenarios; we will talk about this in the coming slides, where we want to showcase how existing as well as new networks can be managed using ONAP. Having said that, when we brainstormed the complete evolution, we came up with five different scenarios in which ONAP could be used for the orchestration and management functions of 5G slicing. In all of them, what we are trying to show is that the core of it is the SO part.
In the first scenario, the CSMF, NSMF and NSSMF all come as part of ONAP itself, so all the management of the slices happens within ONAP. In the second scenario, the OSS/BSS and the CSMF come from the user or the operator, but the NSMF and NSSMF are implemented using ONAP. The third scenario is more restrictive: only the NSMF part of ONAP is leveraged for the 5G slice management, and both the southbound, the federated orchestrators, and the northbound are handled by entities external to ONAP. The fourth is again a typical scenario, where the core, transport and RAN parts of the slice management, the NSSMFs, are external to ONAP itself, but the central part of the orchestration, the CSMF and NSMF, is done using ONAP. And the fifth scenario, which is not that typically used but which we wanted to keep for a holistic picture, is where only the NSSMF part is done in ONAP, while the CSMF and NSMF are implemented in the operator's own way, external to ONAP. Across this whole picture, we found that scenario four was the closest one to start with. Even there, we took the core NSSMF as the starting point for the Frankfurt release, the previous release, and we implemented the CSMF and NSMF entities within ONAP following the 3GPP standard specs. The core NSSMF alone was done as part of Frankfurt. The Guilin release, which is coming in November 2020, next month, enhances this further.
We have leveraged it further so that the core, transport and RAN parts of the slice management, the NSSMFs, are within ONAP itself, along with the CSMF and NSMF. Having said this, I want to stress that we still have backward compatibility, so interaction with external implementations of all these NSSMFs remains available; that is something we do not want to lose while doing this. Now, if I split this entire 5G slicing journey into stages, there are three. The first is the preparation part, day zero. Preparation includes the design, onboarding and distribution of the templates; the templates include the NSTs and the other day-zero configurations required to instantiate slice management in the operator's environment. The second is the instantiation part, which includes creation, activation, deactivation and termination based on need; basically, the creation of the slice instance itself. The third part is the closed loop: once the instance is running and deployed on the environment, you should be able to do the reporting, the telemetry, the monitoring, all of that. We also want KPI monitoring done in an intelligent way; we want an AI/ML-based approach so that this is proactive rather than reactive. That is something we are working on. In short, the red items are the ones we implemented as part of Frankfurt, the blue ones are in progress, and the green ones are future work.
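The three stages above imply a simple lifecycle for each slice instance. As a rough illustration, here is a minimal sketch of that lifecycle as a state machine; the state names and allowed transitions are my own simplification of the create/activate/deactivate/terminate operations just described, not ONAP's actual implementation.

```python
from enum import Enum

class SliceState(Enum):
    # Day 0: template designed, onboarded and distributed
    DESIGNED = "designed"
    # Day 1: instance created but not yet carrying traffic
    CREATED = "created"
    ACTIVE = "active"
    DEACTIVATED = "deactivated"
    TERMINATED = "terminated"

# Allowed lifecycle transitions, loosely following the operations
# described above (create, activate, deactivate, terminate).
TRANSITIONS = {
    SliceState.DESIGNED: {SliceState.CREATED},
    SliceState.CREATED: {SliceState.ACTIVE, SliceState.TERMINATED},
    SliceState.ACTIVE: {SliceState.DEACTIVATED},
    SliceState.DEACTIVATED: {SliceState.ACTIVE, SliceState.TERMINATED},
    SliceState.TERMINATED: set(),
}

def transition(current: SliceState, target: SliceState) -> SliceState:
    """Move a slice instance to a new lifecycle state, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

The point of the sketch is simply that deactivation is reversible while termination is not, which matches the "based on need" wording above.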
So the KPI monitoring, the closed loop and control loop, and the intelligent, analytics-based slicing are what we will have in the future. Now, if I map the same three stages onto ONAP, then, and I would not say this is the exhaustive ONAP architecture, we took the core components of ONAP and looked at which components participate in which of the three stages I just discussed. Stage one is day zero, the preparation stage, where SDC is the core component doing the design part of the service. This is where we have the VNF, the PNF and the CNF, that is, the virtual, physical and container network functions. We want all three as part of our service design, a heterogeneous service that includes all three in different combinations. That service is designed and then onboarded; for example, for a CNF we need to onboard the Helm charts. Such things are done here, and then it is distributed to the different components of ONAP. Other things happening here include the construction of the policies and the CDS CBA, the Controller Blueprint Archive that the Controller Design Studio needs to understand. All of those are done as day zero, the preparation part, and then distributed to the different components of ONAP. Day one is the allocation or instantiation part, where SO is typically the core component. It is the brain of this entire logic, and it interacts with the other components, OOF, AAI, SDNC, CDS and so on, based on need. SO is the one that actually runs all three functions, the CSMF, the NSMF and the NSSMF, across the core, RAN and transport domains.
And then it interacts with AAI for the inventory part. As the name suggests, AAI covers both the active and the available inventory: it takes the topology information, which is updated through discovery or auto-discovery, and when SO creates an instance, that instance is persisted in this module of ONAP. OOF helps with homing and placement. SDNC and CDS are the controller layers that help us interact with the software entities of the network. So that is how the three stages of operation are done using ONAP. The day-two part is where DCAE is the core component, the Data Collection, Analytics and Events engine, which helps with the data collection as well as the analytics on top of that data. It takes the information that SO has persisted in AAI and works on it for the telemetry part and then, later, the analytics part. That is what we have to do in the future. Now we will talk about the core NSSMF part. I will cover the core NSSMF and then hand over to my friend Swami for the transport and RAN parts and the roadmap. As I said, in Frankfurt we started the journey of slice management, and the core was taken up with PNFs. But in Guilin we have taken a leap. There was one more requirement in ONAP, CNFO, the container network function orchestration, which started in parallel. We wanted to club these two together and showcase a heterogeneous service that has all three types of resources, the CNFs, the PNFs and the VNFs, as I said before. And the core part is what we want to showcase using the CNFs. So these are the components; again, I do not want to go into the details.
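To make the day-two closed loop concrete, here is a hedged sketch of the kind of check an analytics microservice could run, comparing a slice KPI against its SLO and recommending an action. The function names, the sample data, the 20 ms SLO and the "reconfigure-slice" action string are purely illustrative, not DCAE's actual interfaces.

```python
def kpi_breached(samples, slo, window=5):
    """True if the rolling average of the last `window` samples exceeds the SLO."""
    recent = samples[-window:]
    return sum(recent) / len(recent) > slo

def closed_loop_step(latency_samples_ms, slo_latency_ms=20.0):
    """Recommend an action for one closed-loop iteration.

    In a real deployment a breach would publish an event for policy to act
    on; here we simply return the recommended action as a string.
    """
    if kpi_breached(latency_samples_ms, slo_latency_ms):
        return "reconfigure-slice"  # e.g. scale out or re-home the slice subnet
    return "no-op"

print(closed_loop_step([10, 12, 30, 35, 40]))  # sustained latency breach
```

An AI/ML-based version, as mentioned above, would replace the fixed-threshold check with a learned predictor so the loop can act before the SLO is actually breached.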
It is a different requirement altogether, but I wanted to highlight which entities were modified for Guilin from the CNFO perspective, and that is what is shown here. I will go through this as quickly as possible, but let me explain the flow. If I talk about day zero again, within these twelve steps, day zero covers steps zero to three, where the user configures the K8s cluster. This is for the CNF orchestration part: the K8s cluster is configured by the user, and he attaches the cluster information to ONAP. Once he attaches the cluster information, it is onboarded to AAI and also to the K8s plugin, which interacts with this cluster. At the same time, the user onboards the Helm charts to SDC, and SDC distributes them to the different components. The three major recipients of the distribution for the CNFO work are SO, CDS and Multicloud. SO takes the blueprint for the service and decomposes it into the different resources. It is a model-driven approach in which SO understands whether a resource is a Helm chart, in which case it goes to the K8s plugin. For the PNFs and VNFs it goes to SDNC and further on to the other components; if it is a VNF it goes to OpenStack, otherwise it goes to the K8s plugin. CDS takes the CBA and helps us with the enrichment and, later on, the configuration. The Helm chart itself is distributed to the K8s plugin, which understands it and uses it to instantiate the resource on the K8s cluster. In short, the K8s plugin acts as the resource orchestrator, and SO works as the service orchestrator for the CNF space. In the interest of time, I will keep it short here.
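As a rough illustration of the handoff just described, here is a hedged sketch of the kind of request that could be sent toward the K8s plugin to instantiate an onboarded Helm chart. The field names and overall shape are illustrative only, modeled loosely on the multicloud-k8s "resource bundle" style, and are not the plugin's exact API contract.

```python
import json

def make_cnf_instantiate_request(rb_name, rb_version, profile_name, cloud_region):
    """Build an instantiation request for a Helm-packaged CNF.

    Illustrative only: "resource bundle" stands for the onboarded Helm
    chart, and these keys are not the real plugin schema.
    """
    return {
        "rb-name": rb_name,            # the onboarded Helm chart
        "rb-version": rb_version,
        "profile-name": profile_name,  # per-deployment values overrides
        "cloud-region": cloud_region,  # K8s cluster registered with ONAP
    }

payload = make_cnf_instantiate_request("amf-chart", "1.0", "default", "k8s-region-1")
print(json.dumps(payload, indent=2))
```

The separation of chart, profile and region mirrors the flow above: the chart comes from SDC distribution, the profile carries day-zero values, and the region is the cluster the user attached to ONAP.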
The same flow I explained before is what we transformed for CNF orchestration. The CSMF and NSMF functionalities are done by SO, and the NSSMF part, essentially scenario one that we brainstormed about, is where the core NSSMF comes into the picture. It takes the workflow, which then works with CDS and Multicloud and finally reaches the K8s cluster. The same is shown here; I will keep these slides brief, since I will be sharing them, and the same is explained in further detail there. With that, I want to hand over to my friend Swami, who will talk about the RAN and transport NSSMFs and then the roadmap. Over to you, Swami. Yeah, thank you very much, Seshu. We will spend the next few minutes going briefly over the RAN and transport NSSMF functionality, as well as what lies ahead beyond the Guilin release. So, if you look at RAN slicing, consider for a moment what constitutes a typical 5G RAN. We know that in the context of 5G, the RAN itself can be decentralized, in the sense that the RUs, CUs and DUs need not be co-located; they can be physically separate, connected through the fronthaul and the midhaul. The RUs and DUs are connected by the fronthaul, and the DUs and CUs by the midhaul. From a slicing perspective, if we consider this disaggregated RAN, there are two deployment options, based on what we have heard from the community members as well as the service providers who are considering deploying network slicing in their networks. In the first option, the RAN network slice subnet management function, the RAN NSSMF, is responsible for the RAN as a whole.
When we say the RAN as a whole, it means it encompasses the RAN network functions, that is the RU, CU and DU, as well as the transport connectivity, that is the fronthaul and the midhaul. Obviously, for the transport connectivity part, it is going to invoke the transport network NSSMF to get slice instances in the fronthaul and the midhaul. That is the first scenario. In the second scenario, the RAN NSSMF is responsible only for the RAN network functions, the RU, CU and DU, whereas for the transport connectivity, the fronthaul and midhaul, the NSMF directly interacts with the transport network NSSMF to allocate the fronthaul and midhaul slices. So these are the two options, and our implementation approach tries to support both. I also want to focus on one other important point on this particular picture: we want to be able to support connectivity from an NSMF within ONAP to a RAN NSSMF that is outside ONAP. For this, it is quite important that we have a set of standard interfaces between the NSMF and the NSSMFs. Between the RAN NSSMF, the core NSSMF and the NSMF, we want the interfaces to be aligned with the 3GPP APIs, whereas for the NSMF to the TN NSSMF, and eventually, in the case of option one, between the RAN NSSMF and the TN NSSMF, it shall be aligned with the TSCI, the transport slice connectivity interface, which is being specified by the IETF, because 3GPP does not say much about transport network connectivity or transport network slicing as such. So maybe to the next slide. Yeah. If I have to give you a brief view of the RAN slicing itself:
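To illustrate the kind of 3GPP-aligned call that flows over the NSMF-to-NSSMF interface, here is a hedged sketch of an "allocate NSSI" request in the spirit of 3GPP TS 28.531. The field names and the slice profile attributes are a simplification for illustration, not the normative 3GPP schema, and the template and coverage identifiers are hypothetical.

```python
def make_allocate_nssi_request(nsst_id, slice_profile):
    """Sketch of an NSMF -> NSSMF "allocate NSSI" request in the spirit of
    3GPP TS 28.531; simplified, not the normative schema."""
    return {"nsstId": nsst_id, "sliceProfile": slice_profile}

request = make_allocate_nssi_request(
    "ran-nsst-001",                  # hypothetical RAN slice subnet template id
    {
        "sNSSAI": "01-000001",       # slice/service type + slice differentiator
        "latencyMs": 10,
        "maxNumberOfUEs": 10000,
        "coverageArea": "tac-1001",  # hypothetical tracking-area reference
    },
)
```

The NSSMF would match the requested slice profile against existing NSSIs, deciding whether to reuse a shared subnet instance or create a new one.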
So in the Guilin release, which is currently ongoing, we are targeting a brownfield scenario, wherein we assume that the RAN network functions, whether PNFs or VNFs, are already instantiated or onboarded, the initial configuration is ready, and they are up and running. What remains, when you want to instantiate a new RAN slice subnet instance, is to determine what resources need to be allocated for this particular slice subnet instance and then perform the necessary configuration or reconfiguration. Similarly, for the closed loop, we will perform the necessary reconfiguration of the resources in the RAN network functions. That, in short, is what we are targeting for the Guilin release. With respect to the RAN slice inventory, we obviously assume that certain things are preloaded or preconfigured, and these will be loaded into what we call the config DB in the Guilin release. The information in this config DB will eventually move into the Configuration Persistency Service, which is being proposed as a separate component within ONAP for future releases. And here too we want to align with the standard interfaces. I will not spend much time on this slide; it just gives you a view that the NSSMF workflows and the core logic are going to live within the service orchestrator, which obviously gets help from components such as OOF for determining the RAN slice subnet instance and the RAN resources. The inventory is going to be in AAI, and, like I said, the RAN-related configuration details are going to be in the config DB. For the closed loop, it is going to rely on some microservices within DCAE. With respect to transport network slicing, there are a couple of important points I want to highlight. One is the interface between the TN NSSMF and the NSMF.
Here it is going to be based on the TSCI interface, though we are only starting with some of this work in the Guilin release; it will continue beyond Guilin as well, and the information models are also going to be aligned with the TSCI. That is one aspect. The second aspect is that on the southbound of the TN NSSMF, we want to support generic resource APIs. The intention is that we should be able to interact and interoperate with any of the domain controllers, whether IP or optical, and whether the domain controllers are within ONAP or outside ONAP. So maybe to the next slide, Seshu. From an architectural standpoint, as far as the transport network NSSMF is concerned, we want to support all the different deployment options: the TN NSSMF can be integrated as part of ONAP along with the NSMF, or the TN NSSMF can be outside ONAP, in which case the NSMF within ONAP will interact with an external TN NSSMF. And the TN NSSMF should be able to interact with a RAN NSSMF that is inside ONAP or outside ONAP, that is, with either option one or option two as we discussed before. Now, all of this is possible only when we have a standards-based interface, and that is where, as I said earlier, we want it to be based on the TSCI model. So, just to give you a glimpse of what lies ahead in the future releases: obviously this is not an exhaustive list; it just summarizes some of the high-level items that are our priorities, based on the inputs we have gathered from the community members, especially those who are actively involved in this use case, as well as from some of the service providers. Let me just list them in a nutshell, without reading everything on the slide.
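As an illustration of the TSCI-style intent the NSMF could hand to a TN NSSMF, here is a hedged sketch of a transport slice request carrying endpoints and SLOs. The structure, field names and endpoint identifiers are my own illustrative simplification of the IETF transport slice work, not the actual TSCI schema.

```python
def make_transport_slice_request(slice_id, endpoints, bandwidth_mbps, max_latency_ms):
    """Build a TSCI-style transport slice intent.

    Illustrative simplification of the IETF transport slice work;
    not the actual TSCI schema.
    """
    return {
        "slice-id": slice_id,
        "endpoints": endpoints,  # e.g. a DU site and a CU site for the midhaul
        "slo": {
            "guaranteed-bandwidth-mbps": bandwidth_mbps,
            "max-latency-ms": max_latency_ms,
        },
    }

# A hypothetical midhaul slice between a DU site and a CU site
midhaul = make_transport_slice_request("tn-midhaul-1", ["du-site-a", "cu-site-b"], 1000, 2)
```

Because the request speaks only of endpoints and SLOs, the same intent can be mapped onto IP or optical domain controllers, inside or outside ONAP, which is exactly the interoperability goal described above.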
So, from the slice and slice subnet lifecycle management perspective, the PM data collection and the closed-loop actions that result from it are of the highest importance, and a step further for these closed-loop actions would be to employ AI/ML for the closed-loop scenarios. With respect to RAN slice orchestration, we also want to support CUs and DUs not just as VNFs but also as CNFs, along with their interaction with the PNFs, as well as support for the standard interfaces. With respect to core slice subnet orchestration, we want to support certain core network functions being shared across core NSSIs, then the chaining of core CNFs, as well as the interaction with some of the core network functions that are relevant from an orchestration standpoint, such as the network slice selection function, the NSSF, and the analytics function, the NWDAF. And from a transport slice orchestration perspective, we want to support the TSCI interface in its full-fledged form, as well as multipoint-to-multipoint connectivity, first between the RAN and the core, and then within the RAN also, that is, in the fronthaul and the midhaul, and we want to support truly multi-domain transport network slice subnet orchestration. As I said, this is just a glimpse, not exhaustive, and we would definitely welcome any inputs, comments or feedback you might have, so that we can pick up whatever is possible as we move forward in this journey, because this is going to be a multi-release effort given the size and the complexity. Your inputs are definitely welcome. With this, we stop here, and we would be happy to take any questions, suggestions or feedback you might have. Just to add to what Swami said, we are looking for helping hands from the community; we want to take this forward as a collaborative effort across the telecom industry.
So anyone who is interested in joining this task force or this effort can reach out to any of us, and they will be welcome to take it forward with us. Thank you. Thank you very much.