Hello everyone, welcome to this talk on AI/ML and data analytics in the context of the Non-Real-Time RIC and the NWDAF. My name is Sandeep and I work for Aarna Networks; I'm part of the engineering team. My name is Bono and I also work for Aarna Networks, as part of the engineering team. The agenda for today is that we are going to give an introduction to the NWDAF and the Non-RT RIC, and then we are going to talk about our analytics platform architecture, which is a unified architecture that can implement both the NWDAF and the Non-RT RIC use cases. Following that we have a demo, where we demonstrate the NF load use case in the NWDAF. The NWDAF is the network data analytics function in the 5G core, and as the name says, the NWDAF provides analytics information to other network functions, the OAM, and the application functions. The NWDAF is by definition a network function, and it has to register itself with the NRF, like other network functions. For registration it provides the analytics IDs it supports, which are later used by other network functions in order to discover the required NWDAF instance. The kind of information that the NWDAF provides is statistics and predictions, and it supports both a subscription model and a one-time request model. The data that the NWDAF consumes in order to derive these analytics comes from the application functions, the 5G core network functions, and the OAM. It collects all sorts of data, infrastructure-related data, UE-related data, and that is basically what it uses in order to derive the statistics and the predictions. The Non-RT RIC, in contrast, is more of a platform, one which provides analytics for the RAN functions. The Non-RT RIC is an extensible platform, and we can add more functionality to it by means of rApps. These rApps are analytics applications, which are backed by ML models.
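The registration step mentioned above can be sketched as building an NF profile that advertises the analytics the NWDAF supports and PUT-ing it to the NRF. This is a minimal illustration, not a compliant TS 29.510 profile: the exact keys (especially under `nwdafInfo`) and the endpoint address are assumptions.

```python
import json
import uuid


def build_nwdaf_profile():
    """Build a minimal, illustrative NF profile for NWDAF registration.

    Field names follow the spirit of 3GPP TS 29.510; treat the exact
    keys (especially 'nwdafInfo') as assumptions for this sketch.
    """
    nf_instance_id = str(uuid.uuid4())
    profile = {
        "nfInstanceId": nf_instance_id,
        "nfType": "NWDAF",
        "nfStatus": "REGISTERED",
        "ipv4Addresses": ["10.0.0.5"],           # hypothetical NWDAF endpoint
        # Analytics IDs advertised so that consumers can discover this NWDAF
        "nwdafInfo": {"eventIds": ["NF_LOAD"]},  # assumed key names
    }
    return nf_instance_id, profile


if __name__ == "__main__":
    nf_id, profile = build_nwdaf_profile()
    # Registration would be a PUT to the NRF's NF management endpoint, e.g.:
    # requests.put(f"http://{nrf}/nnrf-nfm/v1/nf-instances/{nf_id}",
    #              json=profile)
    print(json.dumps(profile, indent=2))
```

A consumer later discovers this instance by filtering on `nfType` and the advertised analytics IDs.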
The rApps provide policies and enrichment information to the components running in the Near-RT RIC over the A1 interface. What we see on the screen is a high-level block diagram of the Non-RT RIC within the SMO framework. There are a bunch of components in the Non-RT RIC which manage the rApps, their lifecycle, and the model inferencing inside the rApps. The Non-RT RIC also has components which manage the A1 policies; these are the components which get the latest and greatest policies from the rApps and then apply them to the Near-RT RIC over the A1 interface. Now, with this introduction of the NWDAF and the Non-RT RIC, we are going to look at the commonalities between the two, common requirements rather. I will not go over the differences because they are obvious. What it takes to implement both the Non-RT RIC and the NWDAF is an MLOps kind of platform, which is used for training the ML models. These ML models make up the NWDAF analytics and the rApps in the Non-RT RIC platform. So we have come up with a unified architecture wherein we implement both the NWDAF and the Non-RT RIC. This slide shows the ONAP components that our architecture is leveraging. To give some names: we have the collectors, different sorts of collectors from the ONAP DCAE project. We have SDN-R, which is part of the SMO and implements the Non-RT RIC pieces, the A1 policy manager and the rApps manager. The DMaaP bus is the Kafka bus, which is again another ONAP project. And then we have Acumos. Acumos is used as the ML catalog, and it hosts the machine learning models which are trained in the external MLOps platform. We plan to interface with Acumos by creating a model inferencing layer, which is going to be a microservice in our platform.
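To make the A1 policy path concrete, here is a sketch of how an rApp-side component might address a policy toward the Near-RT RIC. The URL layout follows the general O-RAN A1-P convention of policy types containing policy instances; the version segment, host, policy type ID, and body schema are all assumptions for illustration.

```python
def build_a1_policy_request(ric_host, policy_type_id, policy_id, policy_body):
    """Sketch of an A1-P policy create/update call toward the Near-RT RIC.

    The path shape (policytypes/{type}/policies/{instance}) follows the
    O-RAN A1-P convention; everything else here is a hypothetical example.
    """
    url = (f"http://{ric_host}/A1-P/v2/policytypes/"
           f"{policy_type_id}/policies/{policy_id}")
    return url, policy_body


url, body = build_a1_policy_request(
    "near-rt-ric.example:8080",            # hypothetical Near-RT RIC host
    policy_type_id=20008,                  # hypothetical QoS policy type
    policy_id="qos-1",
    policy_body={"scope": {"ueId": "ue-1"}, "qosObjectives": {"gfbr": 100}},
)
# The rApp (via the A1 policy manager) would then PUT `body` to `url`.
```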
This microservice is going to interact with Acumos, fetch the microservices which wrap these ML models, and deploy them across the target clusters: as the NWDAF in the NWDAF use case, and as rApps in the Non-RT RIC use case. This slide shows the implementation of the NWDAF. On the top layer, we see the MLOps platform and Acumos. The MLOps platform provisions the model designers to train the ML models, and it also handles retraining the ML models and running the data pipelines for data transformation. The source of data for this MLOps platform is the data lake, which holds the data from all the 5G core network functions, the OAM, and the infrastructure-related data which we plan to collect through the Prometheus servers. The box on the left is AMCOP. AMCOP is Aarna's multi-cluster orchestration platform. AMCOP has the DCAE components, which are mainly the DCAE collectors, and we plan to implement the model inferencing microservice there, which is going to interface with the Acumos platform, consume the ML model microservices in the ML catalog hosted by Acumos, and perform the model inferencing in the NWDAF in the target clusters. On the right-hand side, we see the edge cluster, where the 5G core network functions are deployed. These network functions and the Prometheus servers are deployed and managed by the AMCOP platform. The data collectors in the AMCOP platform can subscribe to the NF-related data through the NEF. This data is then uploaded into the data lake, which is consumed by the machine learning workflows in the MLOps platform for the purpose of training the ML models. The Prometheus servers deployed in these target clusters are configured to scrape the container-level and node-level metrics, and for Prometheus, the remote write endpoint is again this data lake.
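A minimal Prometheus configuration for the scrape-and-remote-write setup just described might look like the following. The scrape targets and the data lake's receiver URL are hypothetical; only the `scrape_configs`/`remote_write` structure is standard Prometheus configuration.

```yaml
# Illustrative Prometheus config: scrape node-level and container-level
# metrics and remote-write them to the data lake (receiver URL assumed).
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]      # node-level metrics
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]           # container-level metrics

remote_write:
  - url: http://datalake.example:9201/write  # data lake ingest endpoint (assumed)
```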
This data is used by ML models such as CPU prediction and memory prediction. Very similar to the previous diagram, this slide shows the implementation of the Non-RT RIC. The differences here are that the model inferencing service is now not deploying the ML models into NWDAFs; it is instead deploying the ML models into the rApps of the Non-RT RIC. And in place of the core collectors, the VES collectors and file collectors are collecting the RAN-related data over the O1 interface, which is uploaded to the data lake and later consumed for machine learning purposes. Similar to the previous slide, there is Prometheus for collecting the infrastructure-related data. All this data is used by the MLOps platform, and the trained models are then pushed to the catalog in Acumos, which is consumed by the model inferencing microservice. This is how the rApps are deployed and managed. With this, we can move on to the next section, which is the demo of the NF load use case in the NWDAF. Yeah, hi everyone. Now we will see the NWDAF NF load use case demo. Prior to that, I just wanted to give some information on the NF load use case. The NWDAF supports multiple analytics services, and NF load is one of them. The NF load use case collects data from the OAM and the NRF; mainly, these are the two data sources for it. The kind of output analytics it provides is the status of the NFs and of their resources, that is, the status of CPU, memory, and storage, and the load. It provides analytics in two forms: historical data in the form of statistics, and also predictions. That's about the NF load use case. Let's see our implementation. At a high level, there are three modules visible on this NF load slide. Primarily, the blue box is our NWDAF implementation. Apart from that, it has the NRF.
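The NF load output described above (per-NF CPU, memory, and storage status) might be shaped roughly as follows. This loosely follows the NF load level information defined in 3GPP TS 29.520, but the exact key names and values here are assumptions for illustration, not a verified schema.

```python
# Illustrative shape of an NF load analytics result; key names loosely
# follow TS 29.520's NF load level information but are assumptions here.
analytics_response = {
    "analyticsId": "NF_LOAD",           # the requested analytics ID
    "nfLoadLevelInfos": [
        {
            "nfType": "AMF",            # hypothetical target NF
            "nfInstanceId": "b0e2-example-instance-id",
            "nfCpuUsage": 42,           # percent
            "nfMemoryUsage": 61,        # percent
            "nfStorageUsage": 18,       # percent
        }
    ],
}

# A statistics response would carry observed values for a past window,
# while a prediction response would carry the same fields for a future one.
```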
The NRF is one of the 5G core network functions, the Network Repository Function. And the AF: this AF is an application function, and in this use case it acts as a consumer of the NWDAF. Let's start with the NRF first. As I mentioned, the NRF is the repository function: all the network functions in the 5G core make an entry here, so that if any network function wants to do a lookup, it calls the NRF. In this demo, we have used the free5GC NRF implementation. And the application function: the AF is basically a consumer of the NWDAF in this case. It gets the prediction output and simply logs it. In a real-world scenario, these consumers would get the data and take various actions, like infrastructure scaling, or feeding this information to some other third-party applications. So that is the AF. In this demo, the AF simply acts as a consumer; we are not doing any closed-loop scenarios here, it just logs the output. It is implemented as a client. Coming to the NWDAF part: we have developed this as part of a POC. The NWDAF has various building blocks, out of which we have implemented three layers in this case. The first layer is the service API. The service API in the NWDAF, as per the 3GPP standards, provides two kinds of APIs. One is the event subscription, where consumers subscribe to an event and get notified by the NWDAF based on their subscriptions. The other one is the analytics info API, which is request/response. We have implemented analytics info, and we will see the same in our demo as well. The next part is the analytics layer. Here we have implemented a CPU prediction model; basically, this gives the CPU predictions for the given inputs. We trained it on offline data, so we don't have any real-time data, and that is why the data collectors you see here are not live.
Ideally, the NWDAF has to subscribe to the various network functions, but here we have collected the data beforehand and made it available offline. This data is used for the training, and the service API is used to get the predictions. With this, I will explain the flow, and after that I will show the demo. The first part is that, as in the NWDAF concept, it registers itself with the NRF. In the registration, it gives the endpoint of the NWDAF. That call is NFRegister, a 3GPP-specified API. Once it's registered, it starts serving requests; that is the NWDAF part. After registration, the AF, which is the consumer, comes into the picture. It makes an NFDiscovery call to the NRF and asks for all the NWDAFs registered with this NRF. The NRF gives a response containing the endpoint of the NWDAF. With the NWDAF endpoint, the AF prepares an analytics info request and calls the NWDAF. Upon receiving the request, the NWDAF looks at the data points, calls the CPU prediction model, and gets the CPU prediction output from the analytics layer. The analytics info API then creates the response in the 3GPP-standard format for the given request and sends it back to the AF. The AF simply logs it. So that is the flow, and I'll be showing the demo right now. Just to explain the deployment and configuration: for the deployment, we will be using Aarna's AMCOP platform, which simplifies the deployment and configuration part. All the network functions we have seen, the NWDAF, the NRF, and the application function, can be deployed on multiple edge clusters with AMCOP. But for this demo, we'll be deploying all of them in a single edge cluster, and we'll see the interactions between them with the help of the AMCOP platform. I have a pre-recorded demo, which I'll walk through right now.
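The AF-side flow just described (NFDiscovery toward the NRF, then a one-shot analytics info request toward the NWDAF) can be sketched as follows. The service path prefixes follow the usual 3GPP naming (`nnrf-disc`, `nnwdaf-analyticsinfo`), but the hosts and the query parameter layout are simplified assumptions for this sketch.

```python
def build_discovery_url(nrf_host):
    """NFDiscovery query asking the NRF for registered NWDAF instances."""
    return (f"http://{nrf_host}/nnrf-disc/v1/nf-instances"
            "?target-nf-type=NWDAF&requester-nf-type=AF")


def build_analytics_url(nwdaf_host):
    """One-shot analytics info request for NF load toward the NWDAF.

    The query parameter layout is simplified/assumed for illustration.
    """
    return (f"http://{nwdaf_host}/nnwdaf-analyticsinfo/v1/analytics"
            "?event-id=NF_LOAD")


disc_url = build_discovery_url("nrf.example:8000")       # hypothetical host
analytics_url = build_analytics_url("nwdaf.example:8000")
# The AF would GET disc_url, read the NWDAF endpoint out of the response,
# then GET analytics_url against that endpoint and log the CPU prediction.
```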
Let's have a quick look at what the setup is going to look like. There is the AMCOP platform, which is up and running, and we have onboarded one edge cluster to it. On this edge cluster, all the NFs and the AF will be orchestrated. This is our AMCOP platform UI. If you see the highlighted entry, I have onboarded one edge cluster. The first service is the NRF service; a service can have multiple applications. The first application is the NRF application. The application has been created, but we need to upload the corresponding Helm charts. We have created the Helm charts, and I'm uploading them right now. Okay. Now, the NRF depends on MongoDB, so I'm creating the MongoDB application next. Again, the corresponding Helm charts have to be uploaded here, which I'm doing right now. Now the NRF service has been created, and it has two applications, NRF and MongoDB. Next is the service creation for the NWDAF. Again, it has the NWDAF application first, along with the corresponding Helm charts for the implementation we have done, which I'm uploading. Just for simplicity, I'm adding the AF, the application function, as another application in the same service; we could also create it as a separate service. Now the NWDAF service is also created, and design time is over. Now we go to the runtime, which is service instance creation. The first one is the NRF: creating the NRF instance. While creating the instance, we can select the target edge cluster; as I mentioned earlier, I have already onboarded one edge cluster, and I'm selecting that same cluster now. The next one is the NWDAF, and for the NWDAF also we are selecting the same target edge cluster. Now we have created both service instances. Just before instantiating them, which is what triggers the orchestration, we will check the target edge cluster to see whether it has any pods running or not.
As you can see, right now there are no pods running. Now we'll continue and watch it. The service instance has been created for the NRF; it's orchestrated and up and running. The next one is the NWDAF. If you look now, three pods in total are instantiated: the AF, the NWDAF, and, since the NWDAF internally depends on another pod, the CPU prediction model microservice, all up and running. So these two pods are for the NRF, and these are for the NWDAF. Let's look at the NWDAF logs just to see the interactions. As I mentioned earlier, the first step is to register with the NRF. The highlighted line is the NRF registration: it successfully registered with the NRF and provided its endpoint. After that, it starts serving requests. Now you see one of the requests handled by the NWDAF; in the highlighted lines, you see that it receives the request and serves it. It internally calls the CPU prediction model microservice, which is the next highlighted line. It gets the response from the CPU model microservice, and that response is embedded into the analytics info response. That's it for the NWDAF logs. Now we will see the AF logs, that is, the client perspective. In this case, our AF application function runs periodically, so we will look at just one occurrence. First, it calls the NRF to get the NWDAF endpoint; that is what is highlighted. It gets the NWDAF endpoint, and it calls the NWDAF with an analytics info request. It receives the response from the NWDAF, and it extracts the CPU prediction, the corresponding instance ID, the corresponding status, and all that information, and it simply logs it. Here is a clearer view of the data we received from the NWDAF. Yeah, that's pretty much it on the demo side.