Welcome everyone joining the LF Networking webinar on ONAP Honolulu. We will start momentarily. Hello and welcome to the latest installment of the LF Networking webinar series. Today's topic is the ONAP Honolulu release. Before we begin, just a few housekeeping items. The presentation slides, as well as the video on demand of today's webinar, will be sent out to all webinar registrants in the next day. We also encourage you all to send in your questions to our panelist speakers using the Q&A function in the webinar, and we will get to questions at the end. Today's speakers are Catherine Lefèvre from AT&T, John Keeney from Ericsson, and Swaminathan Seetharaman from Wipro. Lin Meng from China Mobile also contributed to the presentation. Without further ado, I'd like to turn it over to our first speaker, Catherine Lefèvre. Thank you, Brandon. Good morning, good afternoon, good evening, ladies and gentlemen. Welcome to the Linux Foundation Networking webinar dedicated to our eighth ONAP release, whose name is Honolulu. After a quick overview of our Open Network Automation Platform, John Keeney, Swaminathan, and myself will give you further insight into the new release content. John will present the new capabilities to support the O-RAN architecture, and Swami will share the latest enhancements of our end-to-end network slicing use case. A quick refresher about ONAP: ONAP is a comprehensive platform for orchestration, management, and automation of network and edge computing services for network operators, cloud providers, and enterprises. It provides real-time, policy-driven orchestration and automation of physical, virtual, and containerized network functions. ONAP enables rapid automation of new services and complete lifecycle management that are critical for 5G and next-generation networks. The figures in the middle of this slide show the number of contributions since ONAP's inception.
Over 200 developers from 30 organizations collaborated to deliver the Honolulu release. ONAP resides in an ecosystem with standards development organizations, where cross-influence occurs for the benefit of the industry. Just to give you some concrete examples: the Honolulu release supports the ETSI NFV SOL001, SOL002, SOL003, SOL004, SOL005, and SOL007 standards. As part of an effort to better integrate and align ONAP, 3GPP, and O-RAN, ONAP sought to integrate the ONAP VES (VNF Event Streaming) specification into the 3GPP specifications, to drive standards-based alignment for fault management and performance management telemetry collection. The ONAP community also values cross-collaboration with other open source communities to orchestrate, manage, and automate a modern network stack. The Honolulu release contains enhancements to three major use cases: the cross-domain and cross-layer VPN, the 5G self-organizing network, and finally end-to-end network slicing. It also includes 12 functional changes, four best practices, and two global requirements. The concepts of best practice and global requirement, which consist of design patterns applicable to the whole code base, were introduced to ensure that we keep enhancing platform robustness. Significant new capabilities were added around end-to-end 5G network slicing, including three internal network slice subnet management function (NSSMF) components for the RAN, core, and transport domains, and better compliance with 3GPP standards. Swami will provide more details later. The Honolulu release provides important updates to support cloud-native network functions, including configuration of Helm-based CNFs and seamless day-one and day-two operations, new functionality implemented in the Controller Design Studio component, and Swagger documentation for the API of the Kubernetes plugin component that you can find in the Multicloud project.
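The fault and performance telemetry mentioned above flows as VES events. Below is a minimal sketch of what a VES fault event looks like; the field names follow the general shape of the VES common event format, but the version numbers, source names, and values are illustrative assumptions, not taken from the webinar.

```python
import json

def make_fault_event(source, alarm, severity, seq):
    # Assemble a VES-style fault event: a common header plus fault fields.
    return {
        "event": {
            "commonEventHeader": {
                "domain": "fault",
                "eventId": f"fault-{source}-{seq}",
                "eventName": f"Fault_{alarm}",
                "priority": "High",
                "reportingEntityName": source,
                "sourceName": source,
                "sequence": seq,
                "startEpochMicrosec": 1617955200000000,
                "lastEpochMicrosec": 1617955200000000,
                "version": "4.1",
                "vesEventListenerVersion": "7.1",
            },
            "faultFields": {
                "faultFieldsVersion": "4.0",
                "alarmCondition": alarm,
                "eventSeverity": severity,
                "eventSourceType": "other",
                "specificProblem": alarm,
                "vfStatus": "Active",
            },
        }
    }

event = make_fault_event("gnb-cu-001", "LinkDown", "CRITICAL", 1)
payload = json.dumps(event)  # the body a network function would POST to a VES collector
print(event["event"]["commonEventHeader"]["domain"])  # fault
```

In practice the event would be posted over HTTPS to the DCAE VES collector, which publishes it onward for analytics; the sketch stops at building the payload.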
Some ONAP components are part of the Service Management and Orchestration (SMO) framework, with broader support for the non-real-time RAN Intelligent Controller functionality. A new component, the Configuration Persistence Service, also called CPS, has been added to the ONAP architecture, providing database functionality to persist network element run-time information. John will provide more details later about how ONAP is supporting the O-RAN architecture. Modularity has been an important topic in ONAP, to allow users to pick and choose the components that they would like to deploy based on their specific use case, and Honolulu continues to advance modularity. DCAE simplifies microservice deployment via Helm, with new KPI microservices. AAI includes support for multi-tenancy. And the ONAP Operations Manager (OOM) project now supports IPv4 and IPv6 for all the components. Collaborating with industry through cross-open-source-community engagement and cross-influence with standards development organizations; developing impactful features and functions like the 5G footprint, network slicing, and O-RAN integration; continuously adding more cloud-native and modular capabilities and robust integration; implementing ONAP best practices and global requirements in our source code to offer scalable, reliable, and secure production-ready deployment: all of this makes ONAP a true enabler for innovation in support of future industry use cases. While ONAP has primarily been used by network service providers to support their network automation transformation and virtualization journey, the ONAP community also recognizes the value of ONAP in enterprise and vertical markets. A new task force, ONAP for Enterprise Business, has been created accordingly. We welcome participation from new contributors that want to expand the applicability of ONAP. Now I give the floor to John Keeney, who will tell us more about ONAP O-RAN integration. John, I think you're on mute.
Ah, I found the mute button. So hello, my name is John Keeney from Ericsson. I'll be briefly talking about ONAP's work towards enabling the O-RAN vision. But I should probably briefly introduce a little background about O-RAN first. Okay, so the O-RAN Alliance is a new telecoms industry initiative set up about three years ago. It styles itself as a new standardization body for an open, disaggregated, interoperable RAN. It was originally formed as a merger of the xRAN and C-RAN initiatives that existed for a few years prior to 2018. The Alliance was initially set up, and continues to be steered, by communication service providers, or network operators. Now, together with telecoms equipment vendors and other interested parties, the O-RAN Alliance defines new standards for a more open and competitive RAN supplier ecosystem. The main drive is to support openness in the RAN. This in turn should lead to faster innovation and support a more intelligent, interoperable RAN. The O-RAN Software Community, or OSC, is a joint collaboration between the O-RAN Alliance and the Linux Foundation to support open source implementation of O-RAN concepts and functions. In addition to developing new code, it is expected that projects will align with related open source projects, all in support of the O-RAN Alliance architecture and specs. Even though OSC is a relatively new initiative, it is quite active, and to date OSC has had three software releases. So, the O-RAN architecture: it's quite similar to, and pretty well aligned with, the existing 3GPP RAN architecture, but it also includes some new functions and interfaces, including two new RAN Intelligent Controller logical functions, one in the RAN and one in the OAM or service management and orchestration layer, and a new open, standardized E2 interface to support more open, disaggregated RAN functions. The O1 interface is similar to the existing 3GPP FCAPS interfaces for network function OAM, mainly with some tweaks to the models used.
A new A1 interface is also introduced. There's also a focus on being able to optionally execute the O-RAN functions on white-box or cloud platforms, represented as an O-Cloud. To help with the execution platform management, including some aspects of RAN function lifecycle management, a new O2 interface is also introduced. So, as we saw in the architecture slide, the service management and orchestration functions are needed to operate, orchestrate, manage, and optimize the O-RAN functions. I'll talk a little bit about how open source ONAP can help with the O-RAN vision of a comprehensive platform for orchestration, management, and automation of the O-RAN functions, the platforms, and the services. ONAP has been around for quite some time now and is making significant strides, with eight releases now, including the new Honolulu release. ONAP is now reaching a level of maturity where it can fulfill many of the needs for an O-RAN SMO. Its maturity can be seen in its cloud-native, modular, and robust network automation platform functions, but also in its use cases, infrastructure, the community, the ceremonies and governance, and the aligned architecture used in ONAP. ONAP has established an extensive community at this stage, with almost all key industry players involved. And this means that ONAP is now the main industry focal point for open source network automation and management. The functions provided by ONAP, which are themselves modular and cloud-native, form the key building blocks, while the community, use cases, processes, and architecture inform the work to keep it on the right track. ONAP's work is now additionally targeting the O-RAN Alliance specifications and models for how the RAN should work and be managed. And just like ONAP, the O-RAN Alliance is also driven by solid, real-world automation use cases.
The vision is that ONAP, the O-RAN Alliance, and the open source community at large, including OSC, can work together. In such cases, we hope to see downstream reuse from ONAP, where other projects can use existing SMO and OAM platform functions from ONAP when it makes sense. We also hope to see upstream contributions to ONAP from other projects, such as SMO and OAM requirements, use cases, extensions, POCs, configurations, deployments, etc., again, where it makes sense. Also, informed by ONAP and other open source projects, including OSC, we can together help steer the O-RAN Alliance standardization and test activities to support real openness and interoperability. A key message here is that we should keep pushing for alignment and cooperation, as fragmentation or divisiveness can only harm the entire industry. So let's look briefly at some of the functions that an O-RAN SMO might need, and you will quickly see that many of these functions are already worked on in ONAP. To call out just a few: in ONAP Honolulu we have seen some useful SMO-related progress in many areas. For example, the continued hard work from the CCSDK and SDN-R teams towards supporting the O-RAN O1 interfaces, with lots of work still ongoing around O1 modeling aspects with the O-RAN Alliance. We've seen a proof-of-concept demonstration of model-driven control loop lifecycle management. We've seen improvements in the Policy Framework. We've seen improvements in service orchestration and cloud-native networking. We've also seen improvements in network function upgrade and PM data collection management. Support for the O1 interface continues to improve, while VES support and DMaaP march on. The Configuration Persistence Service that Catherine mentioned earlier has now started as a new project. This of course is a key requirement for any RAN automation platform.
Other key functions like SDC and others continue to improve, but of course there are many other advances that I just don't have time to mention here. We do see that ONAP still has a lot of work to do, and there is plenty of work to do in other projects as well. So here we are, back again at the O-RAN overall architecture. I hope you'll indulge me for just a minute while I explicitly call out some of the work that we've continued in Honolulu to support O-RAN's new A1 interface. I've already mentioned that O-RAN introduces two new logical functions for a more intelligent RAN: the non-real-time RIC, or RAN Intelligent Controller, in the service management and orchestration layer, and numerous near-real-time RICs in the RAN domain. A new A1 interface is introduced to help these functions enable more flexible, fine-grained, and dynamic optimization and assurance capabilities in the RAN. The A1 interface is defined to support three types of operations: A1 policy management, where applications in the non-real-time RIC pass guidance policies to the RAN; A1 enrichment information, where you can pass additional information down to the RAN to improve its operation; and we also aim to support AI and ML functions in the RAN, but the exact details of this are still being defined. As part of this work, we've introduced a new function in CCSDK to act as a controller function for the A1 interface. In ONAP we concentrate on the A1 policy part of the O-RAN A1 interface by introducing a new A1 Policy Management Service, along with an A1 adapter function that's incorporated into SDNC. Here, typed A1 policies in the RAN can now be quickly created, viewed, updated, and deleted in a simple and intuitive way, with all access and synchronization handled by the A1 controllers. I could talk about this all day, but I need to move on.
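The typed policy create/view/update/delete cycle just described can be sketched as a tiny in-memory store. This is an illustration of the CRUD pattern only; the class, method names, policy type, and values are invented for the example and are not the real A1 Policy Management Service API.

```python
# Minimal in-memory sketch of typed A1 policy CRUD (names are illustrative).
class A1PolicyStore:
    def __init__(self):
        self._policies = {}   # policy_id -> (policy_type, body)

    def create(self, policy_id, policy_type, body):
        if policy_id in self._policies:
            raise ValueError(f"policy {policy_id} already exists")
        self._policies[policy_id] = (policy_type, dict(body))

    def read(self, policy_id):
        return self._policies[policy_id][1]

    def update(self, policy_id, body):
        ptype, _ = self._policies[policy_id]   # the type of a policy is fixed
        self._policies[policy_id] = (ptype, dict(body))

    def delete(self, policy_id):
        del self._policies[policy_id]

# Example: a hypothetical QoS-target policy aimed at a near-real-time RIC.
store = A1PolicyStore()
store.create("qos-1", "qos_target", {"ue_group": "premium", "target_latency_ms": 10})
store.update("qos-1", {"ue_group": "premium", "target_latency_ms": 5})
print(store.read("qos-1")["target_latency_ms"])  # 5
store.delete("qos-1")
```

In the real service, the store sits behind a REST front end, and the A1 adapter in SDNC synchronizes the policies towards the near-real-time RICs.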
So far I have focused on the automation platform functions provided by ONAP, where most of these functions are relevant for O-RAN. But we also need to mention some of the other work that's happening in ONAP, and I want to draw attention to the cross-cutting work that happens in the use cases, with a special mention for the 5G slicing and SON use cases that Swami will be presenting next. Of particular note is the formal alignment happening between the ONAP 5G slicing project and the O-RAN Alliance Slicing Task Group. The key to ONAP is the ONAP community, with most of the telecommunications industry represented in some way. But it's actually quite a small community, really, so it remains critical that we avoid fragmentation in open source initiatives and support standardization. We see that cloud-native and openness are key drivers for O-RAN and for ONAP, with a focus on modularization, reuse, and ease of integration. There are many streams of O-RAN work in ONAP supporting this vision, too many to mention, and ONAP Honolulu is a key step towards realizing it. I need to stop here, though, and pass over to Swami. So please, Swami, take it away. Yeah, sure. Thanks, John. You probably should stop sharing. Okay. Hello everyone. I'm Swaminathan from Wipro. I'm co-leading the network slicing use case in ONAP along with Lin Meng from China Mobile, who unfortunately could not join the presentation today. I will be giving a brief overview of the network slicing orchestration work that has happened in ONAP since the Frankfurt release. Because, as you know, network slicing is a pretty complex subject with a lot of parts to it, I will be able to do justice only at a high level in the short time today. But for those of you who want more technical details or want to do a deep dive on some of the aspects, you are welcome to reach out to one of us, or join one of the weekly network slicing use case calls.
Before I start, I would like to acknowledge that this use case has been the culmination of the efforts of quite a number of contributors from several organizations, whether service providers, vendors, or software providers. It's been a mix of contributors across organizations, and the result of all that is what I will try to summarize today. So, maybe for those of you who are a little new to slicing: at the orchestration plane there are three slice management functions, as defined by 3GPP. The topmost one, the one which typically interacts with, or sometimes may even be part of, the OSS, is the communication service management function (CSMF). It is responsible for obtaining the inputs on the communication service that the operator or the slice consumer may want, and then translating them into network-slice-related requirements. One thing to note in this context: the service here is not, for example, an individual user making a voice call or a video call; we are talking at the level of eMBB, mMTC, or URLLC kinds of services. The CSMF then passes the slice-related requirements to the network slice management function (NSMF), which is at the next level. The NSMF is responsible for the end-to-end management and orchestration of the overall network slice within a service provider network. It is responsible for deriving the subnet-related requirements, stitching together the slice subnet instances to form an end-to-end slice, and then obviously monitoring and taking care that everything is fine in terms of the lifecycle. The network slice subnet management function (NSSMF), which is at the bottom, is at the next level of granularity: it is responsible for the slice subnet instances within each subdomain.
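The three-level split just described can be sketched as a toy pipeline: a CSMF turns a service order into slice requirements, an NSMF derives per-domain requirements and stitches the result, and one NSSMF per domain allocates its own subnet instance. Class names, profile fields, and identifiers are illustrative assumptions, not the ONAP implementation.

```python
# Toy CSMF -> NSMF -> NSSMF flow (all names and numbers are illustrative).
class Nssmf:
    def __init__(self, domain):
        self.domain = domain
        self.instances = []

    def allocate(self, slice_profile):
        # Instantiate (or in a fuller sketch, reuse) a slice subnet instance.
        nssi = f"{self.domain}-nssi-{len(self.instances) + 1}"
        self.instances.append((nssi, slice_profile))
        return nssi

class Nsmf:
    def __init__(self, nssmfs):
        self.nssmfs = nssmfs

    def create_slice(self, service_profile):
        # Derive one slice profile per domain and stitch the NSSIs together.
        return {d: f.allocate({"latency_ms": service_profile["latency_ms"],
                               "domain": d})
                for d, f in self.nssmfs.items()}

class Csmf:
    def __init__(self, nsmf):
        self.nsmf = nsmf

    def order(self, service_type, latency_ms):
        # Translate a communication service order into slice requirements.
        profile = {"service_type": service_type, "latency_ms": latency_ms}
        return self.nsmf.create_slice(profile)

csmf = Csmf(Nsmf({d: Nssmf(d) for d in ("ran", "transport", "core")}))
nsi = csmf.order("URLLC", latency_ms=20)
print(sorted(nsi))  # ['core', 'ran', 'transport']
```

The point of the layering is that each NSSMF only sees its own domain's slice profile, while only the NSMF holds the end-to-end view.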
So, for example, there will be a slice subnet management function for the RAN, maybe one for the core, and one for the transport. Obviously there can be more than one, but typically you will have at least one per domain. Now, moving on to the architectural options that were chosen in ONAP for the slice management functions: there are three management functions, so if you look at the combinations, there are eight possible combinations, depending on whether each management function is within ONAP or outside of ONAP. We decided that out of the eight, only five make sense from an ONAP perspective. And even out of those five, we chose scenario four, which you see on the slide, as well as scenario one, as the ones we should start with, taking into account a number of aspects. For example, we started off in the Frankfurt release with scenario four, because that was relatively easier to start with, and we could accomplish some concrete functionality from an end-to-end slice perspective while being able to interoperate with an external NSSMF. We did that with an external core NSSMF in the Frankfurt release, and now it also supports connectivity to an external RAN NSSMF. Then, since the Guilin release, we started on option one, because we wanted ONAP to be self-contained as far as slice management and orchestration is concerned. It also enables mix and match: for example, you could have a couple of NSSMFs, say for the RAN and the core, within ONAP, and then interact with an external transport NSSMF, and so on. And it also enables the remaining three options that you see with respect to the overall architecture, two, three, and five, to be realized in an easy manner; there would be only a couple of things to address in order to support the remaining ones. So that was the philosophy.
And obviously there was also a bit of a deep dive before we finally shortlisted scenario one for continued enhancements. Now, looking at it from a lifecycle point of view, a network slice instance lifecycle point of view: this is nothing new; it is a kind of refined form of what you would have seen in the MANO specifications from ETSI for service lifecycle management. Similar to that, for the slice there is the preparation phase, where you create the templates: the network slice template, the slice subnet templates, the service profile and slice profile templates, this whole bunch of templates. That is the preparation phase, where you design all of those and then onboard them onto ONAP. Then comes the runtime phase, which starts with creation. The creation phase is where you create an end-to-end network slice: you instantiate a network slice along with its constituents. When you look at the lifecycle of a slice subnet instance, you will have a similar picture, but one that talks about the instantiation of the slice subnet instance. So the end-to-end network slice will be created; this creation could involve reuse of some existing slice subnet instances, or it could involve creation of everything from scratch. Then comes the activation phase, where you open it up for traffic, and then the operation phase, where you do the closed-loop automation: you monitor, you collect the metrics, PM and FM data, and you take preventive and corrective actions to make sure that the SLAs are complied with. As a next advancement, you also make sure that you use only the optimum amount of resources, that is, you follow the traffic patterns in terms of resource allocation. Then, when the slice is no longer needed for traffic, you deactivate it, and when the slice is no longer needed at all, you terminate it.
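The lifecycle phases above can be sketched as a small state machine: preparation, then the runtime phases of creation, activation, operation, deactivation, and termination. The transition table below is an illustrative reading of the talk, not a normative 3GPP model.

```python
# Allowed lifecycle transitions for a network slice instance (illustrative).
ALLOWED = {
    "prepared":    {"created"},                  # templates designed and onboarded
    "created":     {"activated", "terminated"},
    "activated":   {"operating", "deactivated"}, # open for traffic
    "operating":   {"deactivated"},              # closed-loop monitoring phase
    "deactivated": {"activated", "terminated"},
    "terminated":  set(),
}

class NetworkSliceInstance:
    def __init__(self, nsi_id):
        self.nsi_id = nsi_id
        self.state = "prepared"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

nsi = NetworkSliceInstance("nsi-001")
for step in ("created", "activated", "operating", "deactivated", "terminated"):
    nsi.transition(step)
print(nsi.state)  # terminated
```

Note that "deactivated" can go back to "activated": a slice taken out of traffic can be reopened without re-instantiation, while "terminated" is final.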
Now, the termination might involve termination of the slice as well as its constituents, or just the slice alone. This is something that has to be kept in mind, because the constituents of a slice can be reused; they can cater to more than one slice. As far as the overall work done so far, which I will touch upon a little later, we have addressed most of these aspects in some form. I wouldn't say everything is fully available in terms of all the different possibilities and capabilities, but at least from an overall lifecycle point of view, all of the aspects are already addressed. As far as the Honolulu release is concerned, we have been focusing on enhancements in the creation and operation phases of the network slice lifecycle. Regarding the overall work done so far on network slicing, Catherine also talked about the alignment with the standards development organizations. For network slicing too, it has been a conscious effort since the beginning to be aligned with the standards, because that is what will make the ONAP slicing solution able to interoperate. It's no joke to have all those different architecture options that I talked about if we do not support interoperability. And for interoperability to happen, it's very important to align with the standards on the interfaces as well as on the functional split. I will not read through everything on the slide, but in summary: between the slice management functions, the main alignment we are looking at is with 3GPP, the 28.53x series. On the northbound of the CSMF, we are looking at alignment with TM Forum, the TMF641 APIs, and we have just started some work on the TMF628 APIs.
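The point about shared constituents can be made concrete with a reference count: a slice subnet instance (NSSI) can serve more than one slice, so terminating a slice should only tear down constituents that no other slice still uses. This is a toy sketch of that bookkeeping, with invented names, not ONAP's inventory logic.

```python
# Reference-counted NSSI inventory (illustrative sketch).
class SubnetInventory:
    def __init__(self):
        self.refs = {}            # nssi_id -> number of slices using it

    def attach(self, nssi_id):
        # A slice starts using (creating or reusing) this subnet instance.
        self.refs[nssi_id] = self.refs.get(nssi_id, 0) + 1

    def release(self, nssi_id):
        # A slice is terminated; tear the NSSI down only if nothing shares it.
        self.refs[nssi_id] -= 1
        if self.refs[nssi_id] == 0:
            del self.refs[nssi_id]
            return "terminated"
        return "kept"             # still shared with another slice

inv = SubnetInventory()
inv.attach("ran-nssi-1")              # used by slice A
inv.attach("ran-nssi-1")              # reused by slice B
print(inv.release("ran-nssi-1"))      # kept
print(inv.release("ran-nssi-1"))      # terminated
```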
Obviously 3GPP does not get into the transport part in detail, so there we are aligning with the transport slice connectivity-interface-related IETF drafts. As far as the southbound interface from the RAN network slice subnet management function is concerned, there again we are aligning with 3GPP, at least in terms of the network resource models, but also with O-RAN: the O1 specification, and gradually the A1 specification as it evolves. In the future we also intend to align with the O2 specification; that is part of the roadmap. And for the closed loop, we are also aligning with ETSI ZSM. So, just to summarize the significant work we did as part of Honolulu; again, I will not read through the slide. Mainly we focused on many aspects related to the NSMF. We made a lot of functional enhancements in the slice management functions, like template selection and then stitching together the slice subnet instances with endpoints. The endpoints concept was introduced by 3GPP some time ago, and our initial version did not include the endpoint modeling and the associated functionality, so that is an enhancement we made as part of Honolulu. There is also the creation of a new NSI involving slice subnet instance creation, going all the way down to the network function or the domain controller, as applicable. As part of the RAN network slice subnet management function, we also implemented the reuse of a RAN network slice subnet instance when it is feasible, that is, when the request allows it and a suitable instance exists. And we achieved some of the closed-loop functionality involving the RAN subnet. As part of the transport network slice subnet management function, we implemented the modification of a TN NSSI: adding a new connection link and modifying the bandwidth of existing connection links. In addition, we also did a couple of other things.
One is that we introduced basic KPI computation functionality, which will evolve further in the upcoming releases. This will help the operator monitor the KPI adherence of a particular slice or slice subnet instance, and then drive any actions that would be needed; that is one. And then we also started on end-to-end integration testing of the different combinations. You have the CSMF, the NSMF, and the three NSSMFs, which brings up a number of combinations in terms of reuse, creation, modification, and so on. We started end-to-end integration tests for all those combinations; this will continue in the upcoming releases, but we have laid the groundwork for it. So, just to summarize what works with the Honolulu release. Going back to the lifecycle management: template design is supported for all the subdomains as well as for the network slice management function, and the same goes for slice instance creation. As far as reuse is concerned, we support reuse at the network slice level, which automatically includes the reuse of its constituents. However, when creating a new slice, we may want to reuse a slice subnet instance from one of the subnets while creating a new slice subnet instance in another subnet; this part is, as of now, supported only in the RAN NSSMF. For the core and transport, it is still work in progress. The activation and deactivation parts we support, but pushing the configuration updates to the southbound is a work in progress. Slice modification we have just started; it's like the tip of the iceberg, something we will take up in the upcoming releases. And the closed loop, again, is something we have just started, with the RAN subnet alone.
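A toy sketch of the basic KPI computation just mentioned: deriving a per-slice throughput KPI from periodic PM counters and checking adherence against a target from the slice profile. The counter names, averaging formula, and numbers are illustrative assumptions.

```python
def slice_throughput_mbps(pm_samples):
    """Average downlink throughput for a slice from periodic PM samples."""
    total_bits = sum(s["dl_volume_bits"] for s in pm_samples)
    total_secs = sum(s["period_secs"] for s in pm_samples)
    return total_bits / total_secs / 1e6

def kpi_adherence(measured, target, higher_is_better=True):
    """True if the measured KPI meets the target from the slice profile."""
    return measured >= target if higher_is_better else measured <= target

# Two 60-second PM reporting periods for one slice (invented values).
samples = [
    {"dl_volume_bits": 600e6, "period_secs": 60},
    {"dl_volume_bits": 900e6, "period_secs": 60},
]
tp = slice_throughput_mbps(samples)
print(round(tp, 2))                    # 12.5
print(kpi_adherence(tp, target=10.0))  # True
```

A latency KPI would use the same adherence check with `higher_is_better=False`, since lower measured latency is better.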
This will then evolve further to the transport, and subsequently also to the core NSSMF. So as you can see, a number of things have already been implemented, but there is still a lot of room for enhancement and for adding many more capabilities, given the number of features that are possible for each of these lifecycle actions. We do intend to continue this work in the upcoming releases. And before I end, just a quick recap of the closed-loop automation framework that we have in ONAP, at least as far as network slicing is concerned. For closed-loop automation, we leverage the control loop framework that exists in ONAP and that has been used for other use cases and requirements as well. In summary: if you look at the bottom of the slide, you have PM or FM data coming from the network, which is collected and analyzed by the relevant microservices. By the way, we introduced a new microservice in DCAE, called the slice analysis microservice, which does the analysis related to slicing. It determines what updates have to be made to the configuration and then triggers the Policy framework. Policy then determines whether the action can be carried out or not, and the control loop gets activated. The actor, in this case the service orchestrator acting as the RAN network slice subnet management function, will in turn trigger the SDN controller to carry out the configuration update, and that's how the loop is completed. What is still pending is to extend it further: to the other subdomains, like transport and core, to the NSMF, and to involve the optimization engine in this particular loop.
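The loop just described can be condensed into three steps: an analysis function inspects PM data and proposes a configuration change, a policy check gates the action, and an actor applies it towards the RAN. Thresholds, function names, and the config payload below are illustrative assumptions, not ONAP code.

```python
def slice_analysis(pm_data, latency_target_ms):
    """Analysis step: detect an SLA breach and propose a config update."""
    worst = max(s["latency_ms"] for s in pm_data)
    if worst > latency_target_ms:
        return {"action": "update-rrm-policy", "add_prb_share_pct": 10}
    return None

def policy_allows(action):
    """Policy step: only permit known, bounded actions."""
    return (action["action"] == "update-rrm-policy"
            and action["add_prb_share_pct"] <= 20)

def actor_apply(action, config):
    """Actor step: carry out the configuration update towards the RAN."""
    config["prb_share_pct"] += action["add_prb_share_pct"]
    return config

ran_config = {"prb_share_pct": 30}
pm = [{"latency_ms": 12}, {"latency_ms": 23}]  # second sample breaches 20 ms
proposal = slice_analysis(pm, latency_target_ms=20)
if proposal and policy_allows(proposal):
    ran_config = actor_apply(proposal, ran_config)
print(ran_config["prb_share_pct"])  # 40
```

In ONAP the three roles map, roughly, to the DCAE slice analysis microservice, the Policy framework, and SO plus the SDN controller as the actor.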
As you can see at the bottom of the slide, we have already realized a couple of closed-loop scenarios involving the RAN slice subnet management function up to the Honolulu release. This slide is just to give you an idea of the standards that we have complied with, are complying with, or have just started compliance work on. We have also been interacting with the recently formed ITU-T Focus Group on Autonomous Networks, especially for the closed-loop automation, in addition to what you see here. Before I end, I want to recap a couple of points. One is that, as I said, whatever we have accomplished so far is definitely significant, but there is still a lot of work to do in terms of functionality, the additional capabilities that can be introduced, and the sophistication that can be added to the network slicing solution in ONAP. For that, I would definitely welcome any contributors, whether in the form of concepts, inputs, actual code, or testing; in whatever form, it would all be welcome. And what we have accomplished so far, as I said at the beginning, is all because of the contributions that came from a number of organizations, with experts with different perspectives and experiences. That is very important in order to address a complex subject like network slicing and to make sure that it is aligned with the standards, interoperable, extendable, and so on. And that's also the true open source spirit, to have true collaboration cutting across organizational boundaries. So with this I would like to end, and maybe it's now time for some questions and answers. I think you might have some interesting questions, and we will be happy to take them. Thank you very much. Thank you very much, Swami. Yes, we've been getting a number of questions from our audience. Please keep them coming.
We'll go through them roughly in the order they've come in. The first one was quite a long question, but we've distilled it down to this brief version: please be so kind as to elaborate on how you assure that an end-to-end slice is compliant with the use case service latency requirements defined by 3GPP. I don't know if Catherine or Swami, you want to take that one. Maybe I can take that question. As far as the use case service latency is concerned, this is driven from the service requirements. The user or the slice consumer specifies a latency requirement, which is translated into the service profile, which is then further decomposed into the slice profiles. This plays a role in the resource allocation and the network function placement, as well as the RAN resource allocation. To answer specifically on the network function placement aspect: that capability exists in ONAP today, but it's not yet integrated with the slicing use case. However, we already do the allocation, the RRM policy updates, based on the slice profile, and the slice profile, as I said before, is derived in an optimal way taking into account the capabilities of the subnets. For example, if the RAN says, "I can support only a latency of 5 milliseconds," we make sure that a service profile of 20 milliseconds is not decomposed into a slice profile for the RAN lower than 5, because then you end up with an infeasible solution, right? Those aspects are already taken care of. So, in short, I would say some of it is there, and some of it is in progress. Great, thank you. And here's a somewhat related question around slicing and standards: has ONAP included the GSMA GST specifications anywhere for the definition of a slice? So I think I did attempt an answer to this question: I believe we are using GSMA NG.116 version 3.0.
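The decomposition rule in that answer can be sketched as splitting an end-to-end latency budget across the RAN, transport, and core subnets while respecting each subnet's minimum supportable latency, so that no slice profile is infeasible. The proportional split weights and numbers are illustrative assumptions, not the actual ONAP optimization logic.

```python
def decompose_latency(e2e_ms, floors_ms, weights):
    """Return per-subnet latency budgets, each >= that subnet's floor."""
    if sum(floors_ms.values()) > e2e_ms:
        raise ValueError("end-to-end latency target is infeasible")
    spare = e2e_ms - sum(floors_ms.values())
    total_w = sum(weights.values())
    # Give every subnet its floor, then share the spare budget by weight.
    return {d: floors_ms[d] + spare * weights[d] / total_w for d in floors_ms}

# Subnet capability floors (e.g. "the RAN can support only 5 ms") and weights.
floors = {"ran": 5.0, "transport": 2.0, "core": 3.0}
weights = {"ran": 2, "transport": 1, "core": 1}
profile = decompose_latency(20.0, floors, weights)
print(profile["ran"])                                # 10.0
print(sum(profile.values()))                         # 20.0
print(all(profile[d] >= floors[d] for d in floors))  # True
```

The feasibility guard at the top is the key point of the answer: if the subnet floors already exceed the end-to-end target, no valid decomposition exists and the request should be rejected up front.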
If you would like additional information, I will also add a link where you can find further details about this standard as it relates to network slicing and our implementation. Yeah, right; as of now we actually use version 2 of NG.116. I think a version 4 has recently been published, but we are yet to adopt that.

The next question is around the 5G Super Blueprint: for the 5G Super Blueprint, it may be necessary to manage releases across multiple LFN projects (ONAP, ODL, etc.). Has this been considered? Is there a proposal as to how this might be done? I think, Catherine, that might relate to working upstream.

Yes, so the 5G Super Blueprint is a cross-open-source collaboration involving several LF projects and beyond. We are currently discussing with the different open source communities impacted by this vision, because for the moment it is still a vision; it is not yet fully implemented, and we have different phases to implement the 5G Super Blueprint. It starts with socialization of the vision, meaning that the people who built the vision are in the process of talking to the different impacted communities, like ONAP, O-RAN, and others, to ensure they are aligned with it. On the ONAP side, we have started to look at this vision, and as part of our ONAP for Enterprise Task Force we have started to assess how we could embrace it in order to move forward. At the TAC level, the Technical Advisory Council, we have also made a suggestion for tracking the release from an end-to-end perspective across the different LF projects: we have suggested creating a federated Jira. Jira is the tool we already use to track requirements, user stories, epics, and bugs for ONAP, and the idea of a federated Jira is to track requirements across all the LF projects.
So this is something that is currently being put in place and that will help provide an end-to-end view of what is going on with the 5G Super Blueprint. I believe that is where we are. Otherwise, there is a bi-weekly call, I think every other Tuesday, where any community member, whether you are from ONAP, O-RAN, or Anuket, can join and learn more about where we are with the 5G Super Blueprint. There are three phases; the first phase, if I refresh my mind a little, involves the Magma components that were recently taken over by the Linux Foundation. So Magma-ONAP integration, and I also believe we have a simulated 5G core and a simulated UE as part of the first phase. I don't know, Brandon, if you want to add any further information.

That sounds good. I just put a link in the chat window to the 5G Super Blueprint landing page on the LFN website. That would be a great place to go for more information and to learn how to get involved.

Okay, the next question is around slicing. It looks like in 5G slicing, mostly custom workflows are used instead of the generic model-driven SO building blocks. Is this true? And is there any reason for doing so? Swami, do you understand the question?

Yeah, sorry, I just responded to that question in the Q&A, Brandon. It is actually not fully true. If you look at the way the workflows are written, yes, we can use generic building blocks as we go forward, but certain things would anyway have to be specific to the slice orchestration or slice subnet orchestration functionality. Take, for example, the slice subnet orchestration, which is implemented in SO: we have made it modular. There is a common workflow shared by all the NSSMFs, and then there are three domain-specific workflows, one each for RAN, core, and transport.
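The modular structure described above, one common workflow that hands off to domain-specific workflows, can be sketched as a simple dispatch pattern. The class and function names below are illustrative stand-ins, not the actual ONAP SO classes or BPMN workflows.

```python
# Hypothetical sketch of a common slice-subnet workflow dispatching to
# domain-specific handlers for RAN, core, and transport. Names are
# illustrative, not real ONAP SO components.

class NssmfWorkflow:
    """Domain-specific part of slice subnet orchestration."""
    def allocate(self, slice_profile: dict) -> str:
        raise NotImplementedError

class RanNssmf(NssmfWorkflow):
    def allocate(self, slice_profile):
        return f"RAN NSSI allocated (latency <= {slice_profile['latency_ms']} ms)"

class CoreNssmf(NssmfWorkflow):
    def allocate(self, slice_profile):
        return "core NSSI allocated"

class TransportNssmf(NssmfWorkflow):
    def allocate(self, slice_profile):
        return "transport NSSI allocated"

def allocate_nssi(domain: str, slice_profile: dict) -> str:
    """Common workflow: shared steps first, then the domain handler."""
    handlers = {"ran": RanNssmf(), "core": CoreNssmf(),
                "transport": TransportNssmf()}
    if domain not in handlers:
        raise ValueError(f"unknown domain: {domain}")
    # ...common steps (validation, persistence, notifications) would go here...
    return handlers[domain].allocate(slice_profile)
```

The point of the pattern is the one Swami makes: adding a new NSSI constituent variant means adding a handler, not rewriting the common workflow.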
Obviously, with respect to the model-driven aspect, I would say much of it is already model-driven, but there is still more that can be done. For example, an NSSI may be composed of further NSSIs, as in the case of the RAN, or an NSSI may be the last level before you reach the network functions and their connectivity. Those things can be interpreted in a model-driven way by looking at the template and its constituents, and then building the workflow accordingly. As of now, we support only one of these scenarios, so the workflow is written to support that. But as we support different options in terms of NSSI constituents (for example, some deployments might split the core into two parts, giving two constituents of an NSSI), we don't want to rewrite the SO workflows for each such variation. Those are aspects we will have to improve further as we go forward.

Great, thank you. And I see a couple of questions from Saad. It looks like, Swami, you could take the first one in text; that one is a little more technical.

Yeah. For the TN NSSMF, Saad, I would encourage you to look at the TN NSSMF flows and the API. There we are pretty much aligning with the IETF drafts related to transport slicing; in the presentation you can also find links to the relevant IETF drafts. As far as the APIs are concerned, as of now we support the standard 3GPP northbound APIs of the NSSMF, such as allocate NSSI or modify NSSI. Eventually we also want to support generic APIs as well, so that we can have the best of both worlds: in a 3GPP-dominant scenario it would support the 3GPP interface, and in a standard TN kind of world it can support that too.
In terms of the functions and capabilities of the NSSMF, I would say we have made a start: as I said, we can modify the connection bandwidth, and certain attributes have been tested. But the focus of testing has not been so much on the southbound side of the NSSMF, towards the domain controller. That we will probably have to do when we start interoperating with real equipment; right now we have been testing interoperability with a simulator.

Okay, thank you. Let's switch gears a little bit. There's a question about what new features were added in this release for applying artificial intelligence and machine learning to resource allocation and closed-loop automation. Do you want to take that one, Catherine?

Yeah. For slicing, we have a machine-learning-based closed-loop automation, where the training of the machine learning model is done outside of ONAP; it is not part of ONAP. The trained model is onboarded as a microservice onto the DCAE component of ONAP. Then, based on the real-time performance data that flows into DCAE, this already-trained machine learning model provides recommendations as to what reconfigurations should be done as part of the closed loop. This is one scenario that has already been implemented and is working. I would encourage you to have a look at the use case wiki if you are interested in the flows and the functionality, such as which PM data we are using.

Okay, great. And there's a question around O-RAN: is a demonstration planned of the functions achieved by ONAP Honolulu with respect to the O-RAN interfaces A1, O1, etc.?

Yeah, maybe I can take a stab at that one. There's nothing planned in the short term to demonstrate either the O1 or the A1 interface functions in ONAP Honolulu.
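The closed-loop pattern described above, a pre-trained model consuming real-time performance-management (PM) data and recommending reconfigurations, can be sketched as follows. Everything here is an assumption for illustration: the function names, the PM field, and the threshold logic are stand-ins, not the actual ONAP/DCAE microservice code.

```python
# Minimal sketch of an ML-driven closed loop: the model is trained
# elsewhere; at runtime we only run inference on incoming PM samples
# and turn the prediction into an action recommendation.
# All names and thresholds are illustrative, not real ONAP code.

def recommend_reconfig(pm_samples, predict):
    """Feed real-time PM data to an already-trained model and map its
    prediction to a closed-loop action recommendation."""
    features = [s["prb_utilization"] for s in pm_samples]
    predicted_load = predict(features)  # inference only; no training here
    if predicted_load > 0.8:
        return {"action": "increase_ran_capacity",
                "predicted_load": predicted_load}
    return {"action": "none", "predicted_load": predicted_load}

# Stand-in for the onboarded model: averages recent utilization.
toy_model = lambda xs: sum(xs) / len(xs)

samples = [{"prb_utilization": u} for u in (0.7, 0.9, 1.0)]
print(recommend_reconfig(samples, toy_model))
```

The separation matters: because training happens outside the platform, the runtime loop stays a thin inference-plus-policy step, which is what lets the model be swapped in as a microservice.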
Probably the best place to see demonstrations of the O-RAN functions would be at the DTF event scheduled in a couple of weeks. Other than that, I would suggest looking at the O-RAN plugfests, which happen about every six months; I think the next one is actually scheduled for November, which is quite a way away. The O-RAN community also holds a number of semi-virtual live sessions every couple of months, aligned with the Mobile World Congress events throughout the year, where you will find demos and videos of the functions. In addition, you'll find videos and recordings of demos on the wikis, particularly the O1- and A1-related work in OSC, which reuses a lot from ONAP, but also on the ONAP wiki itself. Other than that, if you ping us offline, we'll see if we can dig out some other recordings or demos as needed.

Thanks, John, for mentioning the next LFN technical event. I've just put a link to it in the chat window. It will be held virtually from June 7th through 10th, with participation from several LF networking projects. It's also free to attend, so I would encourage you all to learn more, register, and participate in the LF networking communities; it's the best way to learn as well as to get started.

Okay, we're running low on time, but I wanted to ask one more question, and this one's for Catherine: what's next for ONAP?

So what's next for ONAP? First of all, the next release will be called Istanbul. We plan to complete it before the end of this year, probably in quarter four, and definitely in quarter four. We have started to gather requirements from the ONAP community and also from outside the ONAP community. Our first milestone, called M1, is currently scheduled for next week, I think May 20th.
That's where we will really kick off the release, looking at all the requirements submitted by the different team members we call requirement owners. They will start to interact with the different project teams to see what can and cannot be accomplished as part of the release. Normally, we expect to continue to grow our 5G footprint in the domains presented today, network slicing and O-RAN integration. We will continue our journey with SDO harmonization, with ETSI, 3GPP, and TM Forum, and we will have additional requirements focusing on control loops, cloud-native capabilities, and a taste of AI/ML with intent-based networking as well. So that is an overview of the Istanbul scope, scheduled for Q4 this year.

Fantastic. Thank you, Catherine. For those questions that we didn't get to live, we will respond to you offline. A reminder that an email will be sent out to all registrants with the slides as well as a link to this webinar video. I would like to thank each of our panelists for their participation in today's webinar and, of course, for their leadership in the LF networking communities. That does it for this edition of the LFN webinar series. Stay tuned for more webinars being added to our calendar soon. Thank you all. Thank you. Thank you.