Bonjour, I'm Catherine Lefebvre, the ONAP TSC chair. As the industry moves to cloud native, the purpose of today's panel is to share how we can bring together the best of two worlds, ONAP and cloud native. Six members of our ONAP CNF task force will be presenting ONAP's added value proposition. I'm pleased to welcome Srini Addepalli from Intel, Timo Perala from Nokia, Alla Goldner from Amdocs, Ranny Haiby from Samsung, Fred Oliveira from Verizon, and Seshu Kumar from Huawei.

Over the past years, most of the carriers kept investing in their private clouds. They are now considering the migration of some of their network-related workloads to third-party clouds. With technology evolving, 5G and AI, there is a need to host applications, data, and devices differently. The management of this hybrid environment, private and public, raises complex challenges, like latency and bandwidth requirements, location and privacy concerns, including security and resiliency perspectives, and more. In order to address these constraints, the Cloud Native Computing Foundation has defined a reference architecture, in which we believe ONAP has a role to play. Considering this new multi-cloud and containerized environment, Srini, what will be the cornerstone of our cloud-native implementation?

Okay, thank you, Catherine. Yes, in my view it is best described by giving two examples. So, what do you need service orchestration for? Let's take the 5G use case. Many operators have a large number of edge locations at various places: tens of thousands of edges at or near the cell towers, maybe thousands of central office data centers, hundreds of local data centers, a few regional data centers, and of course people like to use public clouds as well, even for some of the control plane or analytics applications.
And even with respect to those data centers, if you take the 5G RAN use case, for example, it's becoming disaggregated: you have 5G RAN components such as the DU, which may need to be deployed across multiple cell tower edges, and the CU-UP and CU-CP, which may need to be deployed across multiple central office data centers. And if you take the 5G core, some control plane functions like AMF and SMF, and the user plane function, may need to be deployed to the local data centers, as you see in the picture. And on top of that, the telcos want to go beyond just deploying network functions and also deploy their customers' applications for new business opportunities.

All these things need to happen in one single click. One should be able to deploy these things in one click, and not only deploy them, but also handle day-2 operations. Let's say there is a new image upgrade on the DU; one should be able to upgrade the tens of thousands of DUs, again, with one click. So the problem statement really is: there are a large number of sites, and the computing is distributed across multiple sites. Then you have customer applications, and you could have multiple customers, hence multi-tenancy is very important. One should be able to deploy not only CNFs, which are of course becoming very popular for 5G, but also legacy applications as VNFs, which also need to be supported. In addition to CNFs and VNFs, one should be able to deploy, as we just discussed, normal enterprise applications in the same Kubernetes clusters, for example. And we are seeing that Kubernetes is becoming the workload orchestrator of choice for new 5G deployments, hence any service orchestrator is expected to support Kubernetes-based sites in addition to whatever sites it already supports, such as OpenStack. So as we see it here, this multi-edge cloud computing scale is similar to, or even beyond, hyperscaler automation.
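The "one-click" fleet-wide upgrade Srini describes can be pictured as fanning a single day-2 intent out to every site hosting the workload. The following is a purely illustrative Python sketch; the site list, field names, and `plan_upgrade` helper are hypothetical, not ONAP APIs:

```python
# Hypothetical sketch: turning one upgrade intent into per-site actions.
# Site records and field names are invented for illustration only.

def plan_upgrade(sites, workload, new_image):
    """Build one upgrade action per site that hosts the given workload."""
    return [
        {"site": s["name"], "workload": workload, "image": new_image}
        for s in sites
        if workload in s["workloads"]
    ]

sites = [
    {"name": "cell-tower-001", "workloads": ["du"]},
    {"name": "cell-tower-002", "workloads": ["du"]},
    {"name": "central-office-01", "workloads": ["cu-cp", "cu-up"]},
]

# One intent ("upgrade the DU image") fans out to every DU site.
actions = plan_upgrade(sites, "du", "registry.example/du:2.1")
print(len(actions))  # one action per DU site: 2
```

At real scale the fan-out would target tens of thousands of sites, which is exactly why the orchestration has to be automated rather than done per site.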
Hence, we believe telcos, and even enterprises and managed service providers, need at-scale orchestration and automation solutions. Hence, service orchestration is very important. Thank you.

Thank you, Srini. So, Timo, ONAP is successfully established as the de facto industry standard for NFV automation, providing a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. What will be its role in the context of CNFs?

Thanks, Catherine, for the question. I believe everybody on this panel has heard comments like: what is the role of ONAP with CNFs, and is there really a need for ONAP in the era of cloud native? I believe we have all come across those questions. So while it's true that ONAP may not look exactly the same for CNFs as it does for VNFs, virtual network functions, we do strongly believe there is still a definite role for ONAP. First of all, it is highly likely that VNFs will continue to exist for some time to come, and they need to be managed and orchestrated. Second, PNFs, or physical network functions, will for sure continue to exist in telco networks, for example radio network equipment in mobile networks, and it goes without saying that management and orchestration of those PNFs continues to be needed. Considering all that, we anticipate that ONAP needs to be capable of management, orchestration, and automation in a hybrid environment with all kinds of workloads, be they CNFs, VNFs, or PNFs. In addition to that, ONAP keeps an end-to-end service view of the system, and this is where we see ONAP bringing undeniable value: orchestrating services end-to-end, where those services may be composed of the different kinds of workloads I just mentioned. This will be the great value that ONAP brings with it, also in the era of cloud native. With that, back to you, Catherine. Thank you, Timo.
So we understand from you the added value of ONAP as a service orchestrator. Alla, what are the other values that ONAP can bring for cloud native?

So first of all, the most important thing that ONAP brings to cloud native, and in general to the whole network functions world, which includes physical, virtual, and cloud-native functions, is that it looks at the whole service, not just the specific network functions and types handled by their specific management systems: the service SLAs to maintain, the service characteristics to maintain, orchestrating the service, viewing the service from the inventory, and designing the service. And I think this is the key difference and the key principle of the network management system that we are supporting in ONAP.

Going into some specific details of what exactly that means: ONAP of course uses a microservices architecture and cloud-native principles, also supporting multi-cloud and the ability to run over any type of cloud. It manages network service and application lifecycle management across multiple VIMs within that multi-VIM, multi-cloud environment, whether it is OpenStack, Kubernetes, Azure, or any vendor-specific VIMs; whatever needs to be supported is supported, even if there are several management services under different domains. This is something that ONAP supports. Now, speaking of design, the ONAP design tools support multiple descriptors: Helm, TOSCA, Heat, whatever is needed is supported by ONAP. Some came along with ONAP's creation and establishment in Linux Foundation Networking; the others were added as we continued our work to enhance ONAP. ONAP OOF, the optimization framework, is the tool for placement and homing, for choosing the right location to place workloads. This is extremely important for 5G, for the many use cases that need to support low-latency applications.
You need certain core network functions to be placed close to the user. This is what that tool does, and it is something that cloud native by itself, looking at the cloud-native world only, does not provide; here the view is of the service and of how the service should be running. DCAE, of course, collects telemetry from remote sites, analyzes it, and generates control loop actions, whether scaling or healing, at the service level. This is also supported in ONAP. Of course, there is support for standard models and APIs from ETSI, TM Forum, and 3GPP. Some examples include 5G network slicing support, where all the development for end-to-end network slicing is based on 3GPP and ETSI NFV specifications. Additionally, ONAP enables the configuration of network functions via RESTful APIs, NETCONF, or Kubernetes services; again, there is a variety of methods, not just one. And in order to view, in real time, the whole service and how its resources are functioning right now, A&AI, ONAP's Active and Available Inventory, is the central repository, not just for a specific cluster, but the central repository of the deployed network element inventory and network service status. So to summarize and highlight what I said: ONAP really is the service-level, end-to-end network management platform solution, going across different domains (when I say domains: RAN, transport, core, everything), different VIMs, different clouds (public, private, specific types of cloud), and combining all of those into a single end-to-end service.

Thank you, Alla. Ranny, how does ONAP fit in the CNF landscape?

Thanks, Catherine. The shift towards cloud-native architecture in telecom is a coordinated effort between several communities and organizations. There are four pillars to this transition, and ONAP has an important role to play here.
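The placement and homing decision Alla attributes to ONAP OOF can be pictured as constrained selection: filter candidate sites by a latency bound, then pick the best of what remains. This is a minimal, hypothetical sketch; the site data, latency figures, and cost criterion are invented for illustration and are not the OOF algorithm itself:

```python
# Illustrative placement/homing sketch: honor a latency constraint,
# then optimize among the feasible sites. Not actual OOF code.

def place_workload(candidate_sites, max_latency_ms):
    """Pick the cheapest site that satisfies the latency constraint."""
    feasible = [s for s in candidate_sites if s["latency_ms"] <= max_latency_ms]
    if not feasible:
        raise ValueError("no site satisfies the latency constraint")
    return min(feasible, key=lambda s: s["cost"])

sites = [
    {"name": "regional-dc", "latency_ms": 40, "cost": 1.0},
    {"name": "edge-site-7", "latency_ms": 5, "cost": 3.0},
    {"name": "edge-site-9", "latency_ms": 8, "cost": 2.0},
]

# A low-latency 5G workload is forced out of the cheap regional DC
# and into the cheapest edge site that still meets the bound.
print(place_workload(sites, max_latency_ms=10)["name"])  # edge-site-9
```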
While this landscape may seem confusing at first glance, it is actually a well-orchestrated endeavor. Each community provides its deliverables, and there is little to no overlap. The first step in building modern network functions is defining the architecture for both the cloud infrastructure and the network function itself. Several groups within Linux Foundation Networking and the CNCF are working on defining the requirements for such an architecture, as well as SDOs like ETSI and the GSMA. ONAP has an updated set of network function requirements to cover mixed workloads of PNFs, VNFs, and CNFs. The requirements ensure that the network function can be part of an end-to-end service managed by ONAP. ONAP requirements complement those provided by CNTT and the CNCF Telecom User Group, and ensure a smooth orchestration experience. In the implementation phase, ONAP provides end-to-end orchestration for cloud-native network functions. It enables building network services that are cloud native and may also include virtual or even physical components. To make sure everything plays nicely together, tools for testing and validating the cloud-native services are being developed. The ONAP community is working on the evolution of its VTP, the VNF Test Platform, to CNTP to support cloud-native network functions. CNTP may be used as part of the badging program and is closely aligned with the LFN compliance, verification, and certification, or CVC, program. The result is going to be a coherent set of requirements and tools for implementation, making it easier for both operators and vendors to smoothly transition to a cloud-native architecture. Back to you, Catherine.

Thank you, Ranny. Fred, can you tell us a little bit more about how ONAP is implementing the ETSI specifications?

Thank you, Catherine. This is Fred Oliveira. I'm a fellow at Verizon, and I am working with ONAP to enable alignment with the ETSI specifications, to allow orchestration and automation using the ETSI methodology.
In order to leverage this, we're using several specifications from ETSI, including the ones listed on the screen here: SOL004 for VNF and PNF packages, SOL007 for NS packages, and various interface specifications, SOL001, SOL003, and SOL005. The way we've integrated these is that we have our SDC component, which onboards packages; we've enabled SOL004 onboarding and the design of these VNFs into ETSI-based network services. That can then be leveraged by ONAP to deploy services and work with an external or internal NFVO to deploy network services and the VNFs associated with them. We've also built several adapters to enable communication with external VNF managers: the SOL003 adapter is a way to interface with external VNF managers, and the SOL005 adapter allows connection to a SOL005-enabled NFVO. And then we also have a SOL002 connection to the EM environment, to have ONAP act as an EM. ETSI is pursuing a definition of container-based network functions, and we're working with them to enable these specifications in ONAP. These are currently stage 2 specifications, IFA011 in particular, to describe the containers and operations. And we intend to leverage these capabilities to onboard, design, and deploy container-based network functions using ONAP in an ETSI-based environment. Thank you, Catherine.

Thank you, Fred. And finally, Seshu, can you give us an overview of where we are with our ONAP cloud-native journey?

Thanks for the question, Catherine. Long story short, the primary motive of this entire effort, as we show in the picture, is to have the complete orchestration and management of all the resources, including the PNFs, the VNFs, and the CNFs, done by ONAP, with ONAP centrally positioned. So the primary motive is to make this possible as a stepping stone towards the cloud-native journey.
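The SOL004 packages Fred mentions are CSAR archives: zip files whose `TOSCA-Metadata/TOSCA.meta` entry points at the main descriptor. A minimal sketch of that structure using only the Python standard library; the descriptor content here is a placeholder, not a valid VNFD:

```python
import io
import zipfile

# Build a toy SOL004-style CSAR in memory: a zip whose TOSCA.meta
# names the entry descriptor. Contents are placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as csar:
    csar.writestr(
        "TOSCA-Metadata/TOSCA.meta",
        "TOSCA-Meta-File-Version: 1.0\n"
        "CSAR-Version: 1.1\n"
        "Entry-Definitions: Definitions/vnfd.yaml\n",
    )
    csar.writestr("Definitions/vnfd.yaml", "# placeholder VNF descriptor\n")

# An onboarding step would open the archive and locate the descriptor.
with zipfile.ZipFile(buf) as csar:
    meta = csar.read("TOSCA-Metadata/TOSCA.meta").decode()
entry = next(
    line.split(": ", 1)[1]
    for line in meta.splitlines()
    if line.startswith("Entry-Definitions")
)
print(entry)  # Definitions/vnfd.yaml
```

This is roughly what SDC has to do when it onboards a SOL004 package: unpack the archive, follow `Entry-Definitions` to the descriptor, and validate from there.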
What we have done so far is that we took the initiative of adding cloud-native, CNF, support in the Casablanca release of ONAP. The work started as a PoC, a proof of concept, driven mainly by Intel, with others who have since joined the main task force, where Kubernetes support was introduced into the MultiCloud project. The intention there was to keep supporting the existing APIs around PNFs and VNFs and also to leverage them for CNF orchestration and management. The diagrams depict the different tasks involved in that; it's a pretty huge, complex effort. The initial diagram may overwhelm someone who isn't familiar with the different components of ONAP, but as you can see, there are multiple issues which have been solved as we move ahead.

From there, we come to the current work, which is happening now. In a nutshell, the previous work had the Helm chart defining the complete CNF model wrapped inside a Heat package. That is what we are trying to solve in the current work. The current work is about giving the Helm chart first-class citizenship during the day-0 and day-1 operations; we are talking about the onboarding, the design, as well as the instantiation workflows here. A new resource type is being introduced in this release. That new resource will be known to both the onboarding and the instantiation workflows. That way we keep Helm-based orchestration and management of the CNFs possible, and we can take it to the next steps. Apart from this flow, we also have a delegation mode here, with respect to the other modes.
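Giving the Helm chart "first-class citizenship" at onboarding means reading the chart's own metadata instead of wrapping it inside a Heat package. A hypothetical sketch of that onboarding step, building and then reading a chart archive in memory; the names and layout are illustrative, not actual ONAP SDC code:

```python
import io
import tarfile

# Build a toy Helm chart archive in memory (a real chart is a .tgz
# containing <name>/Chart.yaml, templates/, etc.).
chart_yaml = "apiVersion: v2\nname: du\nversion: 2.1.0\n"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = chart_yaml.encode()
    info = tarfile.TarInfo("du/Chart.yaml")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Onboarding sketch: open the archive and read the chart's own
# name/version directly, rather than from a wrapping Heat package.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    text = tar.extractfile("du/Chart.yaml").read().decode()
meta = dict(line.split(": ", 1) for line in text.strip().splitlines())
print(meta["name"], meta["version"])  # du 2.1.0
```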
For example, as Ranny rightly said, we are also taking up ETSI-based Kubernetes orchestration for the external VNFM, which is a brownfield scenario. Such things are also being considered. The other key enhancements here are around Helm enrichment through profiling, where a user profile is provided and taken into account from the orchestration perspective. So this is, in a nutshell, the major work currently being considered.

And what is there for the future? The future is what we will require for the complete end-to-end orchestration perspective, where the day-2 operations, like the scaling and healing Alla was describing earlier, should also be included. We also intend to have the integrations that Ranny and Fred just spoke about, coming from different SDOs as well as open source systems: XGVela, CNTT, the CVC. Then on the SDO side, we have 3GPP, we have GSMA, and of course ETSI. These are all being considered from the perspective of holistic involvement of ONAP in the orchestration and management of CNFs, alongside the other entities working around CNFs. The other main intention here is also to take into consideration intent-driven orchestration, which is being worked on right now; its integration with the main flow is something we want to consider in the future. So with that, Catherine, back to you.

Thank you, Seshu. I also want to thank all the speakers and the ONAP CNF task force for this panel. I hope that the audience now understands that ONAP acts as a central network service orchestrator, addressing lifecycle management of network-function-based services. ONAP already deploys and manages VNF and PNF workloads across multiple sites and is now evolving to manage future services that will include additional CNF capabilities and components.
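The day-2 closed-loop behavior the panelists mention, telemetry in and a scaling or healing action out, can be sketched as a simple threshold policy. The thresholds and sample values below are invented for illustration and say nothing about how DCAE or ONAP policies are actually configured:

```python
# Toy closed-loop policy sketch: average recent CPU telemetry and
# decide whether to scale out, scale in, or do nothing.

def control_loop_action(cpu_samples, high=0.8, low=0.2):
    """Return a scaling decision for one workload from CPU telemetry."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale-out"
    if avg < low:
        return "scale-in"
    return "no-op"

print(control_loop_action([0.9, 0.85, 0.95]))  # scale-out
```

In a real deployment, the decision would also be checked against the end-to-end service view, which is precisely the value the panel argues ONAP adds over per-cluster automation.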
Thank you so much, everybody, and now we open the floor for any additional questions. Looks like there is a question: what is the difference between PNFs and VNFs versus CNFs in terms of day-2 operations?

I'll give my view from the VNF/CNF perspective. From the deployment perspective, I would imagine there won't be much difference; the difference is on the day-2 operations side. The intention with CNFs in general is that the configuration gets applied to the CNF even when it is restarted by Kubernetes itself. Normally, with VNFs in a traditional environment, when the VNF gets restarted, somebody has to push the configuration again, maybe via NETCONF, or sometimes things get stored locally and the function basically gets reconfigured as it comes back up. But in the Kubernetes environment, having shared storage should not be assumed. In those cases, the typical method with CNFs is to provide the whole configuration via custom resources in Kubernetes, which get stored in etcd. So whether the CNF gets scaled out, restarted, or moved to some other node, things just keep working. So that's one difference I can think of: with CNFs, people normally express configuration via Kubernetes custom resources, whereas with PNFs and VNFs it is typically NETCONF. Thank you.

I think we also have another question from the Q&A widget: will ONAP support different Kubernetes operator frameworks, like KUDO or the Red Hat Operator Framework, to support more complex management of CNFs than what Helm can support?

Yeah, I can take that. Kubernetes operators are nothing but a CRD-and-controller kind of framework.
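As a toy illustration of that CRD-plus-controller pattern (not ONAP or Operator Framework code): a reconcile function compares the desired state declared in a custom resource with the observed state and emits the actions needed to converge:

```python
# Toy reconcile loop in the spirit of a Kubernetes operator: the
# custom resource declares a desired replica count; the controller
# computes the actions needed to converge the observed state to it.

def reconcile(desired_replicas, observed_replicas):
    """Return the create/delete actions needed to reach desired state."""
    if observed_replicas < desired_replicas:
        return [("create", i) for i in range(observed_replicas, desired_replicas)]
    if observed_replicas > desired_replicas:
        return [("delete", i) for i in range(desired_replicas, observed_replicas)]
    return []  # already converged

print(reconcile(3, 1))  # [('create', 1), ('create', 2)]
```

A real operator runs this kind of logic continuously against the API server, which is why, as noted above, the orchestrator's job reduces to deploying the custom resource that kicks the loop off.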
The purpose of operators typically is to simplify the deployment of a given application or network function; especially when the deployment involves multiple things to be done together, people tend to go with operators. But even the operator, at the end of the day, is initiated using some custom resource, a Kubernetes resource, right? That resource itself needs to be deployed from a central place, and that's where ONAP comes into the picture. So whether it is an operator, or a set of discrete Kubernetes resources that need to be deployed in a workflow fashion, that is the job of ONAP as the service orchestrator. So the answer is yes: any service orchestrator like ONAP shall support purely Helm-chart-based deployments as well as deployments using Kubernetes operators with custom resources. Thank you.

I think we also have another question in the chat room: do you plan to include MEC, multi-access edge computing, orchestrator functionality in ONAP? I think we lost Fred, but maybe, Seshu, can you take this question?

Maybe I can take it up. To me, in my mind at least, we are not explicitly targeting MEC, but based on my understanding, ONAP with the Kubernetes plugin becomes a MEC orchestrator. I believe it has enough functionality to call itself a MEO, a MEC orchestrator, though there may be some additional things we need to do that have to be planned. I would imagine that 90% of the MEC orchestrator functionality will eventually become part of ONAP. The one thing which we don't have yet, but are actively talking about, is called traffic steering.
That's one of the functionalities which MEC requires: if a cloud application is replicated in the edges and the traffic is going through an edge, then even though the traffic is destined for the cloud application instance, if a copy is locally available in that edge, we really want the traffic to be steered to the local application. That aspect we are seriously considering at this time. Thank you.

Thank you, Srini. I think we have one more question; I don't know if you can still answer. Today the RAN ecosystem contains PNFs and VNFs, like the DU and CU. Is there any plan to introduce CNFs for the RAN in the future?

I will give a quick response. ONAP and the O-RAN Software Community are tightly coupled. If you look at the Service Management and Orchestration framework, the SMO, you will see that the O-RAN SC relies a lot on the ONAP architecture; most components of ONAP are part of the SMO. Therefore there is a close relationship with the O-RAN SC open source community, to ensure that we have a well-defined roadmap that we are building together. In previous releases we have integrated adapters like A1 and also O1, and we are working on O2. And I guess when the O-RAN SC community starts to tackle this, we will continue the relationship and the partnership. So that's a quick answer, and we would like to invite you to the Slack channels if you want more details, because I'm not sure we have a lot of time left today. Any other words from the panelists before we go? No? So thank you so much, all of you, for attending our session, and we hope to speak to you soon. Bye-bye. Thank you.