Hi, thanks for joining us today. In the next 30 minutes, we will discuss how ONAP plays an important role in designing and building modern network services made of cloud-native network functions. I'm joined by a panel of industry experts who are leading the evolution of the ONAP project into the cloud-native network era. They will tell us how ONAP is transforming into this new paradigm and how you can leverage it today for your network services. Before we kick off the panel, let's start with a brief reminder of what ONAP is and the highlights of its cloud-native transformation. Catherine, can you please remind us what ONAP is? Thank you, Ranny. The Open Network Automation Platform, ONAP for short, is a comprehensive open source platform for orchestration, management, and automation of network and computing services. The platform can be consumed by network operators, cloud providers, and enterprises. Since 2017, under the Linux Foundation umbrella, eight major releases have been delivered so far. The ONAP community is currently working on Istanbul, the ninth release, expected by the fourth quarter of this year. Through real-time, policy-driven orchestration and automation of physical, virtual, and containerized network functions, ONAP enables rapid automation of new services and complete lifecycle management, which is critical for 5G and the next generation of networks. Today, ONAP is successfully established as the de facto industry standard for NFV and SDN automation, helping key stakeholders accelerate 5G deployment and run enterprise and vertical-market virtualization. ONAP is now on the right path for its cloud-native journey. Back to you, Ranny. Thanks, Catherine. So, Byung, could you highlight the journey that ONAP is going through to orchestrate cloud-native services? Sure. This is about the ONAP CNF orchestration journey. ONAP supports hybrid services, covering VNFs, PNFs, and CNFs, by leveraging open source and standards.
It is able to support both greenfield and brownfield environments, for example CNFs on bare metal, CNFs on VMs, VNFs on VMs, and PNFs. As an end-to-end orchestration platform, ONAP supports Day-0 onboarding and deployment, Day-1 instantiation and configuration, and Day-2 configuration and upgrade. So it is not just infrastructure orchestration but also application configuration and upgrade. For CNFs and 5G, ONAP aligns with industry standards such as ETSI, 3GPP, and others. Beyond conforming to existing specifications, ONAP works with other open source communities, for example O-RAN, to develop and lead new specifications that facilitate CNF orchestration handling and enable effective orchestration. The Application Service Descriptor project is a showcase of this effort, simplifying CNF modeling, packaging, and lifecycle management. By leveraging the available ONAP capabilities, vendors and operators don't need to start from scratch for their modeling, package management, and orchestration of CNFs and 5G. ONAP provides the common infrastructure for PNF, VNF, and CNF network service model and package onboarding, design, and distribution, and supports both ETSI-compliant and cloud-native orchestration. ONAP is also a 5G network slicing management platform that conforms to 3GPP standards, so it can work with other open source 5G slicing controllers that conform to 3GPP, with some integration effort. Briefly looking at the diagram, ONAP supports onboarding of ETSI SOL001 CNF, VNF, PNF, and network service models. It supports ETSI SOL004 for VNF and CNF packaging and ETSI SOL007 for network service packaging, and it will support the Application Service Descriptor for CNFs in the near future. ONAP designs services based on the onboarded models and distributes resource artifacts to the target repositories for runtime component operations.
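The Day-0/Day-1/Day-2 split described above can be pictured as a small lifecycle state machine. The sketch below is purely illustrative; the stage names come from the talk, but the class, method names, and data are hypothetical and not ONAP code:

```python
# Illustrative sketch of the Day-0/1/2 stages an orchestrator walks a
# network function through. All names here are hypothetical.

from enum import Enum

class Stage(Enum):
    ONBOARDED = 0      # Day-0: package onboarded and deployed
    INSTANTIATED = 1   # Day-1: instantiation and initial configuration
    CONFIGURED = 2     # Day-2: runtime (re)configuration and upgrade

class NetworkFunction:
    def __init__(self, name):
        self.name = name
        self.stage = None
        self.history = []

    def _advance(self, stage):
        self.stage = stage
        self.history.append(stage.name)

    def onboard(self):                  # Day-0
        self._advance(Stage.ONBOARDED)

    def instantiate(self, config):      # Day-1
        assert self.stage is Stage.ONBOARDED, "must onboard first"
        self.config = dict(config)
        self._advance(Stage.INSTANTIATED)

    def configure(self, overrides):     # Day-2
        assert self.stage is not None and self.stage.value >= 1
        self.config.update(overrides)
        self._advance(Stage.CONFIGURED)

nf = NetworkFunction("amf-cnf")
nf.onboard()
nf.instantiate({"replicas": 2})
nf.configure({"replicas": 3})
print(nf.history)  # ['ONBOARDED', 'INSTANTIATED', 'CONFIGURED']
```

The point of the ordering checks is the same one made in the talk: orchestration is not only infrastructure deployment (Day-0) but also application configuration and upgrade (Day-1/Day-2), and each stage depends on the previous one.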
Lastly, ONAP orchestrates CNFs and other network resources by interfacing with the platform infrastructure, Containers-as-a-Service such as Kubernetes, the VIM, and the NFVI. That is the journey. Back to you, Ranny. Thanks, Byung. Seshu, could you say a few words about how ONAP is already being used to orchestrate cloud-native network services? Hi, everyone. Thanks, Ranny, for giving me this opportunity to talk about this. So to start with, OPS-5G is a project started by DARPA, the advanced research projects agency of the US Department of Defense. The main purpose of OPS-5G is to provide an open, programmable, and secure 5G. As the name suggests, the intent is a secure 5G network platform that is both open and programmable. The collaboration is happening right now with LFN, and ONAP is going to play a key role in it. As we can see from the diagram here, most of the LFN projects are participating in this, and ONAP has a very key role to play. The project being done in LFN is called the 5G Super Blueprint, and it is based on OPS-5G. It demonstrates how real-life end-to-end services can be orchestrated using mature open-source technologies. When we talk about open source projects here, we are talking about not just one but multiple projects, how they interact with each other, each doing a specific job. The overall picture is an end-to-end secure 5G that is both programmable and open. Coming back to ONAP's role in this: ONAP will be doing not just the initial part. As Byung said on the previous slide, there is the designing, onboarding, and distribution of the packages, but ONAP does more than that: it also helps with the Day-0, Day-1, and Day-2 configuration.
It also helps with the closed loop, or control loop as we call it, which takes care of the post-instantiation checks: the monitoring part, and the proactive and reactive actions that have to be taken for anything that needs to be handled after that. So overall, ONAP will be playing the key role as an orchestrator, for the design and distribution of packages, and also for service assurance, which is basically the monitoring system, along with the policy-driven and CLAMP-driven closed-loop automation system that keeps the complete end-to-end functionality intact. Back over to you, Ranny. Thank you, Seshu. And with that, it's time to introduce our panel members. We have Catherine Lefèvre, who is the ONAP Technical Steering Committee chair, and she is also AVP of Cloud and SDN Platform Integration at AT&T. We have Byung-Woo Jun, who is the ONAP architecture subcommittee vice chair and a long-time contributor, and a principal engineer at Ericsson. We have Seshu Kumar Mudiganti, who is the ONAP Service Orchestration (SO) PTL, also a Technical Steering Committee member, and a lead architect at Huawei. Lukasz Rajewski, who is a long-time ONAP contributor, a committer on several projects, and an R&D expert at Orange. And finally myself, Ranny Haiby; I'm an ONAP TSC member and the director of open source software at Samsung Research America. The first question that we usually hear about ONAP support for cloud-native networks is how to get involved in this activity, how to know what's going on, and how to take part, either as a user or a contributor. There are several ways, as you can see here on this slide. I would like to remind you that you are all welcome to collaborate on the ONAP cloud-native journey. We welcome any type of participation.
You can help us prioritize features, you can bring in new feature requirements, and you can participate in designing and evolving ONAP's architecture. You may share your experience with us, which is always welcome, so we can learn from it. You can contribute anything: that could be documentation, design, and of course code, if you're interested. So how does it work? We have a task force that meets weekly every Thursday, as you can see here on the screen. We document our work on the ONAP wiki, so there's a lot of good information there, and there is a link here as well. We have a mailing list that you may use to ask questions or make suggestions. Please remember, all discussions are open and we welcome you to join. Don't be afraid; you don't have to be actively contributing. You can start by just listening to the conversation and following it, and then later, when you feel it's the right time, become more active in contributing. But again, bottom line, as I said, this is an open discussion and we truly welcome the opinions of end users. So please join us in this ongoing work. Another question that we get frequently is: what is the value-add of ONAP for CNF orchestration? What does it provide on top of Kubernetes, for example? So Catherine, can you maybe say a few words about that? Sure, Ranny. The telco world, as you know, is known to have requirements to support network services whose components are spread across multiple computing clouds and regions. Therefore, having a centralized orchestrator, as Seshu suggested previously, that handles the various network functions in this multi-cloud environment is required. The first added value offered by ONAP is that the platform knows network service capabilities through modeling. The deployment and lifecycle management of network functions are more complex than for the applications themselves. There is a need to support multiple interfaces and provider networks for service function chaining.
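Service function chaining, mentioned above, simply means steering traffic through an ordered list of network functions. A toy sketch, with invented placeholder functions rather than real network elements:

```python
# Toy service function chain: a packet traverses an ordered list of
# network functions. Both functions below are invented placeholders.

def firewall(pkt):
    pkt["inspected"] = True            # mark the packet as inspected
    return pkt

def nat(pkt):
    pkt["src"] = "203.0.113.7"         # translated source (doc-example range)
    return pkt

def apply_chain(pkt, chain):
    """Run the packet through each function in chain order."""
    for fn in chain:
        pkt = fn(pkt)
    return pkt

result = apply_chain({"src": "10.0.0.5", "dst": "198.51.100.1"},
                     [firewall, nat])
print(result)
# {'src': '203.0.113.7', 'dst': '198.51.100.1', 'inspected': True}
```

The orchestrator's job is deciding and maintaining that chain order across clouds and provider networks, which is exactly the complexity Catherine points to.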
Again, ONAP offers solutions to address the complexity of network function deployment and lifecycle management. ONAP supports cloud-native transformation with hybrid deployment of physical, virtualized, and containerized network functions; that is another asset. We can also see that there is a need to show the comprehensive status at the application level instead of at each resource level. So ONAP can monitor the distributed application of a service, run various analytics engines, and even act on actionable insights. The platform provides monitoring, analytics, and observability of distributed applications, plus a control loop mechanism; that is also a great asset offered by ONAP. And finally, ONAP enables a uniform, platform-level service mesh security pattern by leveraging existing open source service mesh projects. That's my feedback to you, Ranny, about the kind of added value the ONAP platform can bring. Thank you, Catherine. Sounds like there is a lot to do, and Kubernetes by itself is not enough, so thank you for educating us about this. As I mentioned, a lot is going on and there is a lot of functionality, but maybe it's useful to understand what features were delivered with the ONAP Honolulu release, what we can expect from the Istanbul release later this year, and also what is planned for the future. Lukasz, maybe you can start. Thank you, Ranny. The recent releases delivered a lot of very useful features for CNF orchestration. Starting with the capability of onboarding native Helm packages, compliant with version 3.5 of Helm, with ONAP we can deploy such packages to dedicated Kubernetes clusters, and ONAP participates heavily in the process of preparing the inputs for the instantiation process. Moreover, ONAP is very important in the process of post-configuration just after the deployment of the Helm packages.
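The "preparation of inputs for the instantiation process" that Lukasz mentions essentially means merging operator-supplied overrides into a Helm chart's default values before deployment. A minimal sketch of that merge, with made-up values and no claim to match ONAP's actual implementation:

```python
# Sketch: deep-merge instantiation overrides into a Helm chart's
# default values, the way an orchestrator prepares inputs before a
# deployment. All values below are invented for illustration.

def deep_merge(defaults, overrides):
    """Return defaults with overrides applied, recursing into dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

chart_defaults = {
    "image": {"repository": "example/upf", "tag": "1.0"},
    "replicaCount": 1,
}
operator_overrides = {
    "image": {"tag": "1.1"},   # bump only the tag, keep the repository
    "replicaCount": 3,
}

values = deep_merge(chart_defaults, operator_overrides)
print(values["image"])         # {'repository': 'example/upf', 'tag': '1.1'}
print(values["replicaCount"])  # 3
```

This mirrors how Helm itself layers `--values` files over a chart's defaults: nested keys are merged rather than replaced wholesale.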
Moreover, with ONAP we can lately perform the health check operation for a CNF, checking whether the deployed CNF is up and running and ready to handle traffic. We also have the possibility of synchronizing configuration between CNFs, VNFs, and other network functions. Moreover, we are able to track the status of the resources deployed on Kubernetes and fetch this information for configuration or other purposes. And we can also create our own custom workflows, in which we can provide any customized logic that will integrate the CNF with our environment. Thank you. And Seshu, maybe a little bit about where ONAP is going in that respect. Yeah, that's a pretty big question, Ranny. I'll try to keep it as brief as possible, because one thing is for sure: CNF is a pretty big journey, and we are just scratching the tip of the iceberg with respect to what we have to do there. I think Lukasz has given us very good insights into what we have done so far; that is the basis of what we will do in the future. One of the major things we are trying to do right now is to align ourselves with standards organizations like ETSI and 3GPP, to ensure that we don't get into a situation that is not standards-based. We are working closely with the ETSI standards work, which is progressing in parallel with us, to bring in standards such as SOL018, and also SOL007 and SOL004, with respect to the VNFD, the NSD, and security. That will be one of the major challenges and one of the major pieces of work, to make sure that we integrate with them.
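The health-check operation described above is, at its core, a readiness poll with a retry budget. A minimal sketch; the probe here is a stand-in callable, not an ONAP or Kubernetes API:

```python
# Sketch: poll a CNF readiness probe until it reports ready or the
# retry budget is exhausted. `probe` is any zero-argument callable
# returning True when the CNF can handle traffic; it stands in for a
# real Kubernetes readiness check. (A real poller would also sleep
# between attempts.)

def health_check(probe, retries=5):
    for attempt in range(1, retries + 1):
        if probe():
            return {"healthy": True, "attempts": attempt}
    return {"healthy": False, "attempts": retries}

# Simulated CNF that becomes ready on the third probe.
state = {"calls": 0}
def fake_probe():
    state["calls"] += 1
    return state["calls"] >= 3

print(health_check(fake_probe))  # {'healthy': True, 'attempts': 3}
```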
The other major contribution would be toward integration with Kubernetes-based metrics. As I was saying on the previous slide about the closed-loop, or control-loop, operations, right now one of the biggest challenges we have is to have the control loop for CNFs, which is a must-have feature for a production-grade system. So surely we will be working on that, with the integration of Prometheus, which is actually happening in DCAE. This will surely set the stage. Another thing we are looking into is how we can leverage the existing systems without modifying the flows, and ensure that the current orchestration flows themselves can take care of all three resource types. By three, I mean the VNFs, the PNFs, and the CNFs: to ensure that we can have a complete service comprising all three of them, or any of them, or a combination of them, orchestrated seamlessly in one go without any fuss. This same platform is what we want to enhance further for different scenarios. The scenarios also include certain things we are looking at right now, because the basis of this is a use case to validate. Again, I want to stress the point that ONAP is not a product, it's a platform, but we use use cases to validate the flows. And as I said, the 5G Super Blueprint is one of the key efforts we will be looking into to orchestrate and demonstrate the complete functionality. We are also looking forward to collaborations with any partners here who would be happy to join us and provide CNFs that can be used to demonstrate certain features. Ranny, this adds to what you said earlier about how to join us. One of the key contributions we are looking forward to is collaborative work on new features.
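The Prometheus-fed closed loop Seshu describes boils down to: collect a metric, evaluate it against a policy, and emit a corrective action. A toy sketch of one loop iteration; the metric name, threshold, and action are invented, and a real loop would pull from Prometheus and drive the orchestrator's lifecycle APIs:

```python
# Toy closed-loop step: evaluate a policy against collected metrics
# and decide a corrective action. Metric names, thresholds, and
# actions are invented for illustration only.

POLICY = {"metric": "cpu_utilization", "threshold": 0.8, "action": "scale_out"}

def control_loop_step(metrics, policy=POLICY):
    value = metrics.get(policy["metric"], 0.0)
    if value > policy["threshold"]:
        return policy["action"]   # reactive corrective action
    return "no_op"                # metric within bounds, nothing to do

print(control_loop_step({"cpu_utilization": 0.92}))  # scale_out
print(control_loop_step({"cpu_utilization": 0.40}))  # no_op
```

The "proactive and reactive actions" mentioned earlier in the panel would be richer policies of this shape, driven by monitoring data instead of hard-coded inputs.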
We have been doing certain features, but that's not enough. We want more and more scenarios to join us; we want operators to come up with the scenarios that are their big problems right now, and we will try to find a solution together. With that said, I'll hand it back to you, Ranny. Yeah, thanks, Seshu. A lot of exciting stuff going on, a lot of work still needs to be done, and it's going to be interesting for sure. One of the questions we often get is about CNF packaging, or how to prepare CNFs for orchestration by ONAP. So, Byung, can you help us understand the format of CNF packaging supported by ONAP? Is it Helm-based? Does it follow the ETSI NFV specifications? And what is that ASD you just mentioned? Sure. ONAP supports the ONAP-proprietary CSAR package with ONAP-internal CNF models and Helm charts; we are using Helm charts. And second, ETSI SOL004-based CNF packaging, again including Helm charts, with an ETSI SOL001 CNFD or the coming ASD, the Application Service Descriptor. The Application Service Descriptor is being developed as a new CNF modeling specification, and it uses ETSI SOL004-based packages. Once the ASD model and package are settled, the ONAP-proprietary CNF CSAR package could be replaced with ASD packages. That's the current plan for the formats ONAP supports. Back to you, Ranny. Thank you, Byung, for putting everything in place for us. Another question that comes to mind: as we know, ONAP is not the only open source networking initiative out there. There are different open source projects and also standards development organizations working on this transition to cloud-native networks. So, Seshu, can you say a few words about how ONAP fits into the bigger picture that includes other open source projects and SDOs? That's a wonderful question, Ranny, because it ties into what I just said about the future plans. As I said, we are already working with ETSI.
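The SOL004-style CSAR packaging Byung outlines can be pictured as an archive carrying a manifest, a descriptor (CNFD or ASD), and Helm-chart artifacts. The sketch below checks such a layout; the file names and directory conventions are illustrative, not normative:

```python
# Sketch: check that a CNF package lists the pieces a SOL004-style
# CSAR is expected to carry: a manifest, a descriptor (CNFD or ASD),
# and at least one Helm chart artifact. File names are illustrative
# and do not come from the specification text.

def validate_package(entries):
    errors = []
    if not any(e.endswith(".mf") for e in entries):
        errors.append("missing manifest (.mf)")
    if not any(e.startswith("Definitions/") for e in entries):
        errors.append("missing descriptor under Definitions/")
    if not any(e.startswith("Artifacts/helm/") and e.endswith(".tgz")
               for e in entries):
        errors.append("missing Helm chart under Artifacts/helm/")
    return errors

package = [
    "cnf_package.mf",
    "Definitions/cnf_descriptor.yaml",   # CNFD or, in the future, ASD
    "Artifacts/helm/upf-1.0.tgz",
]
print(validate_package(package))  # [] -> package layout looks complete
```

As the panel notes, the descriptor slot is where the migration happens: a SOL001 CNFD today, potentially an ASD once that model and package format are settled.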
Byung also said that ASD is one format we are working on together for the packaging and the model structure. The proprietary package, as he said, is something that came from the OpenECOMP days in 2017, and we have been evolving it to integrate ourselves with multiple standards. ETSI is one of them. TM Forum is also one of them, for integration on the northbound side with the feature functionality in the orchestration layer. The other things we are working on are the standards from 3GPP, such as end-to-end slicing. End-to-end slicing is one of the key features happening in ONAP. We have been working on it since the Frankfurt release, release six, and we have been evolving it right from the core, the transport, and the RAN. These are all currently VNF-based; the core is what we have transformed into a CNF, and that is also going to follow the 3GPP standards integration. Coming back to the point of open source, we are working with Anuket. Anuket is going to help us in the validation and certification part of it. Another project we will be working closely with is XGVela. XGVela is the telco PaaS platform, which is a layer above the general PaaS, so that is going to be a good integration point for us toward having the complete general PaaS. Right now it's a black box for us: the cluster is not managed by ONAP. So we will have that black box turn gray, and then slowly white in the future; we expect that to happen with the XGVela integration. Also, integration with EMCO, a new project currently being brought into LFN, is something we are looking forward to. So, in a nutshell, I can say we have integration points both from the SDOs.
When I say SDOs, I'm talking about the standards organizations like TM Forum, ETSI, 3GPP and so on, all of which are actually working on CNFs. On the open source side, we are also integrating ourselves with XGVela; we have plans to integrate with XGVela, and EMCO is somewhere in the future on our roadmap. Also, the 5G Super Blueprint is going to give us a lot; as we saw on the previous slide, it's a huge horizon with multiple projects, including Magma from Facebook, which is also something we are looking at. Also, OKD, the OpenShift Kubernetes Distribution, is a very big integration point we are looking forward to, because it's going to give us a lot of traction toward managing our own clusters, which is not the case right now. As I'm talking, I'm thinking of more, so I think this list is never-ending, but I can say this is a short summary of the possible integrations for us and what we have in the future as part of the roadmap. We are finding liaison officers for all these integration points; that's one key point we have to consider. We also want the experts from these specific projects to come and join us, because we want liaison officers to help make these collaborations more successful. That will surely be a success for both projects in each case. Thank you. Thank you, Seshu. So with that, I think we can pause here and maybe take a few questions from our audience. Please feel free to ask anything that comes to mind.