Hi folks. Welcome to today's webinar. We'll give it a few more seconds to let more people join before we get started. Thanks for joining us. All right, let's go ahead and get started. Thank you everyone for joining us today. This is part of our series of Linux Foundation Networking webinars. Today's discussion is on integrating ONAP with a 5G cloud native network. Our speakers today are community experts Hanen Garcia with Red Hat, and Amar Kapadia and Sriram Rupanagunta with Aarna Networks. Now let's go over the agenda for today. We'll start with an introduction to cloud native 5G and the original KubeCon demo from last November, then an overview of cloud native 5G and ONAP, then what the demo setup looks like; after that we'll go through the actual demo, discuss what's next, and open it up to Q&A. Before I hand it off to our speakers, a couple of housekeeping items. A recording of this webinar will be available on demand starting tomorrow; anyone who registered for today's webinar will receive an email with a link to the on-demand version. We also encourage you to ask questions. There's a Q&A tab, so if any questions arise during the presentation, feel free to type them in. Our panelists will answer some of them as they come up, and we also have time allotted at the end for open Q&A. All right, without further ado, I'm going to hand it over to Hanen, who's going to kick us off today. Thank you very much, Gio, and thank you for this opportunity. I'm really happy to be talking about the work we have been doing around the 5G cloud native demo, especially after KubeCon, and what we are planning for the future. So let's get started. As most of you are probably aware, 5G brings incredible capabilities to mobile networks: ultra-reliable low-latency communication for critical applications, down to one millisecond.
That will definitely improve the experience for autonomous vehicles and augmented reality. There is enhanced mobile broadband: 5G will change how much bandwidth a single user can have, and we have all seen results in the gigabit range, even multiple gigabits. Then there is IoT support, with massive machine-type communication: 5G will support up to one million devices per square kilometer, which is 100 times more than what 4G provides today. But beyond that, 5G is a reality. As of November last year, at the time of the KubeCon presentation, there were around 50 commercial networks in 27 countries. Today, as of this month, there are already 76 commercial deployments in 43 countries, and this is just the beginning of the 5G rollout. Many of us expect that the current COVID-19 pandemic will only act as an acceleration factor for this adoption. As for the business opportunity, that is still ahead of us: it is estimated at more than $600 billion by 2026. It represents a large opportunity for everybody in the industry, including B2B with the private 5G and industrial 5G deployments we are starting to see demand for. One of the testimonials of 5G adoption came from Fu Qiao of China Mobile, who took the stage together with Heather Kirksey from the Linux Foundation and Azhar Sayeed from Red Hat to present China Mobile's vision for 5G. During that presentation, Fu Qiao described the role of open-source software along with some architectural elements. One of those was the evolution of the NFV infrastructure toward cloud-native infrastructure, which is crucial for the evolution of the network, as is the orchestration of services across that infrastructure. As you can see on the slide, they are expecting many, many sites to be deployed.
In this case, they mentioned using ONAP as the orchestrator for those services across the infrastructure. One of the lessons that I myself retained from Fu Qiao is that 5G is much more than just a new generation of network: it represents an opportunity for telecommunications service providers to modernize the network infrastructure. As to the question of why we need cloud-native for 5G, there are many reasons. As you build out an infrastructure tier for deploying 5G components, either the radio or the core, you will want to use that same infrastructure tier to deploy edge applications as well. Basically, you need a common infrastructure tier for applications and network functions. Once you have that, you can centrally manage the lifecycle of those applications; you can treat a 5G network function as just another application, which gives you one single operational model for 5G services. Cloud-native also provides some capabilities by its very nature: high resiliency, flexibility, scalability, characteristics that are already native to those environments. You can move workloads around, and you can create multiple instances of those workloads based on demand, which is exactly what you would like to have in your next-generation network. Bottom line, cloud-native provides a lot of the benefits that 5G requires to meet network demand and to avoid the infrastructure inefficiencies we have seen in the past. I would like to take a couple of minutes now to show you and describe what we did at KubeCon last year. As you may remember, we presented a proof of concept of a cloud-native 5G network live on stage. And we didn't show just one cloud-native network; we actually showed two networks, and this is basically what we built. We built a full cloud-native 4G network in Sophia Antipolis, where we had the radio components and the core components all containerized.
Then in North America, between Montreal and San Diego, we had a non-standalone 5G deployment, meaning a 5G radio connecting to a 4G core. That deployment was likewise distributed: the core of the network was built in Montreal, and the radio access was brought to San Diego; that is what you could actually see on the video, next to Heather and Azhar during the presentation. We also used the public cloud as a resource to deploy network functions. All the network functions were containerized across all the environments, whether in Sophia Antipolis, Montreal, San Diego or the public cloud. On the public cloud in particular we had the IMS network function and some monitoring capabilities as well. So we actually had two separate stacks, one for the premises and one for the public cloud. What you can also see in the picture is the call flow that was executed from San Diego to Sophia Antipolis. As I described, in Sophia Antipolis the user was connected on one side to the IMS network in the public cloud, and on the other side, from San Diego, the other leg was connected to the IMS as well, so we could establish the call end to end. All the premises were connected to the public cloud using a containerized SD-WAN function. That, at a high level, is what was presented at KubeCon, and I think it was time we started looking at what the evolution of this would be; that is part of today's webinar. All right, Hanen, that was great. I have a question for you. Yes, Amar. So this was, as far as I can tell, the first cloud native 5G network demo, and I want to get a sense of what the final demo experience looked like. How was the excitement on the KubeCon floor? Can you maybe talk about that a little bit? Yes, absolutely. It was phenomenal. It was phenomenal.
I think we got a really great reaction during the demo. Of course, it took time to build, but the reaction was just phenomenal: we were able to present, for the first time, a cloud native model for 5G and 4G networks. As I mentioned, everything was containerized except for the radio and the Faraday cage that we were using, of course. But all the components were built to be executed in a cloud native environment. Okay. I have a question. How long did you guys take to create this whole demo? It took us a few months, I would say around four months, to build the demo. It was quite complex because we had teams all over the world: in Sophia Antipolis, of course, in Montreal, in San Diego and Raleigh, in China, in India. There were around 80 to 100 people at peak working on this demo, and I think 14 or more organizations were involved. But yeah, it took us a good four months to build the complete setup across the board. Great. Thanks. Wow. That's actually quite impressive. So I have one other question. How has the interest level been after KubeCon? That's a good question. Even in the immediate aftermath of KubeCon, the amount of demand around this was already incredible. I have myself given this presentation to many service providers across the globe, along with other members of Red Hat and the community. There were so many questions about what we did and how we did it; the reception was phenomenal. And everybody is asking the same question: what is next? Okay, thanks. And regarding that question, I think I'm going to hand it over to you, Amar, to give us an overview of what ONAP is and what is coming. Okay, thanks, Hanen. So I will talk a little bit about the introduction of ONAP into the current demo.
I actually see a question from Subranshu about which orchestration platform was used in the demo. There was no orchestration platform in the original demo; the demo that Hanen just talked about from KubeCon was deployed manually. Now, as you saw, Hanen explained that everything except the radio in 5G is going to be software, so it's going to be entirely software driven. And when you start talking about tens, if not hundreds, of thousands of radio sites, about things like network slicing in the 5G core, about the extremely dynamic nature of the network, you need automation. There is no way a human being, or a set of human beings, can manually manage such a network. So for this reason, we decided to bring an automation component into this 5G cloud native demo, and that was done by introducing an open source project called ONAP. ONAP stands for Open Network Automation Platform. It is part of the Linux Foundation Networking umbrella, and I'll give you a little more background on the next slide. As per the website, ONAP is a comprehensive platform for orchestration, management and automation. Orchestration is when you initially distribute all the software bits to the right places and configure them for day zero. Management is the ongoing lifecycle management: steps like upgrade, change management, healing and termination. Automation is where you monitor all the events, alarms and metrics coming from your applications and the infrastructure, and based on that data you take action, either a lifecycle management action or some other action; that's called control loop automation. The platform targets network and edge computing services for network operators, cloud providers and enterprises. So as you see, it's designed not just for network operators but also for cloud providers and enterprises, providing real-time, policy-driven orchestration and automation of physical and virtual network functions.
When we say virtual network functions, we also imply cloud-native, containerized network functions. ONAP also enables rapid automation of new services and the complete lifecycle management critical for 5G and next-generation networks. So that's ONAP in a nutshell. ONAP has tremendous momentum: the end-user contributors to ONAP represent over 70% of worldwide mobile subscribers. It was kicked off by AT&T, China Mobile and others, but since then, Bell Canada, Orange, Jio, Deutsche Telekom, Swisscom, Vodafone, Verizon, KDDI, Türk Telekom, China Telecom, Telstra and Telecom Italia have also joined as contributors. Another key point about ONAP is that it is highly aligned with the open standards development organizations. ETSI, as everybody knows, is a pioneer in NFV, and ONAP is very strongly aligned with it; with 3GPP on all the 5G efforts; with TM Forum for the northbound APIs; with the O-RAN Alliance in terms of managing the virtual RAN; and with OpenROADM for optical networking. So what is ONAP? This diagram shows a modified ETSI diagram. ETSI created a reference architecture diagram for NFV, and it has been modified slightly here to accommodate some additional functionality that ONAP has. On the left-hand side, you see the data path. The data path consists of commodity servers, storage and switches; the whole aim of this new software-driven world is to not have specialized hardware, but to use standard commodity hardware and build networks the way the big cloud providers build their data centers. On top of that, you have virtualization software: for compute, you have containers and virtual machines; for storage, you have virtual storage; for networking, you have overlay networking. Then we also have data plane acceleration technologies such as DPDK, SR-IOV, etc. That makes up the virtualization software, and on top of that you have the workloads.
So you have network functions, analytics applications, and edge computing applications like AR, VR, etc. That makes up your data path, and the data path runs on either Kubernetes or OpenStack. It could run on other technologies, but these two seem to be taking the lion's share of what's called the NFV infrastructure. So that's the data path layer and the associated cloud orchestration layer. Above that, you have the service orchestrator, or NFV orchestrator, which in turn talks to application managers such as the VNF manager. Then you have the SDN controller, which is a key component. And finally, you have service assurance, or what's shown on the right-hand side as monitoring and control loop automation. ONAP consists of what you see inside the red dotted line. It has a service orchestrator that takes care of global orchestration. It has application managers. It has a variety of controllers; in fact, it has four. It has an SDN controller integrated for global SDN, while data center or virtual networking is still relegated to a component outside of ONAP: ONAP takes care of the global SDN, and another component takes care of the data center SDN. And finally, there's a very strong service assurance component, shown on the right-hand side. That's what ONAP is. In terms of interfaces, on the northbound side ONAP interfaces with OSS/BSS. It interfaces with e-services, which are nothing but portals created for end users to order services, and with big data analytics, which could be data-lake-type applications or could be used for training AI/ML engines. On the southbound side, you have the NFVI and the VIM layer, which is Kubernetes or OpenStack; you can also have SDN controllers on the southbound side. And on the side, you have the workloads, for onboarding network functions and then orchestrating them. So this is a very quick overview of how ONAP fits in with other components.
In the next slide we'll look at ONAP in a little more detail. This is a very high-level diagram of ONAP. ONAP consists of two broad components: one is design time and the other is runtime. The reason for this split is that we want design-time people to be independent of runtime people; we don't want to create any dependencies, so each group can do their work on their own. Now, we are not trying to create silos, of course. There are multiple ways for these two sets of people to collaborate, but we want to delineate the work so that they can make progress without waiting for each other. On the design side, there is something called Service Design and Creation, or SDC. It's a unified design studio where you can onboard network functions and applications, create services, create policies, and create control loops. Then there's DCAE, which stands for Data Collection, Analytics and Events; you can onboard microservices and design control loops in collaboration with the DCAE design studio. So there's a very rich set of design activities you can do. Also, ONAP has gone, I would say, out of its way to make the design process easy, so that you don't have to be a developer with a computer science degree to do the design work. On the runtime side, the main component is the service orchestrator. Like I mentioned, the service orchestrator has multiple controllers it can call underneath: the network controller is one, the application controller is another, and there are other controllers as well that we are not showing. On the right-hand side, you see the Active and Available Inventory (A&AI) project, which is used for inventory services. Through it you have a single source of truth: all network services and applications underneath are tracked, and it's a graph database, so you can track relationships. Then you have a policy engine that's used for decision making.
And last but not least, there's Data Collection, Analytics and Events. All the alarms, metrics, events, logs, whatever data you collect from the applications or from the infrastructure, go to DCAE, and DCAE runs them through an analytics pipeline. The output of that goes to the policy engine, and the policy engine then drives either the service orchestrator or one of the controllers to take action. That's what forms the control loop, so you can make corrective actions without involving human beings. On the northbound side, as we already discussed, you see e-services, OSS/BSS and big data. On the southbound side, also as discussed, you see the cloud infrastructure, i.e. the OpenStack or Kubernetes layer, and third-party controllers, which can include SDN controllers. And I failed to mention one important point: the ONAP service orchestrator can also talk to external controllers, virtual network function managers, element management systems, etc. So this was a super quick 10,000-foot view of ONAP; of course, there's a lot more we could cover at a later point. I'm going to touch on one last point about ONAP. In the community there's a concept of use case blueprints. Of course ONAP can support any use case from an automation point of view, but to highlight a few specific use cases, the community has created these blueprints. The blueprints help other people see how to use ONAP, and they also give the contributors of ONAP direction for prioritizing their work. There's a 5G blueprint, which is very strong in ONAP. There are a couple of residential connectivity blueprints. There's a cross-layer, cross-domain VPN blueprint for optical network-as-a-service and transport use cases; a third-party domain controller blueprint, to use ONAP as an LSO, a lifecycle service orchestrator, with third-party domain controllers underneath; and voice over LTE.
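To make the closed-loop idea concrete before we move on, here is a minimal, purely illustrative Python sketch of the data-to-analytics-to-policy-to-action pipeline just described. All the thresholds, conditions and action names are invented; in real ONAP these roles are played by DCAE, the Policy Framework, and the service orchestrator or controllers.

```python
def analytics(event):
    """Analytics step (DCAE's role): classify a raw event into a condition."""
    if event["metric"] == "cpu" and event["value"] > 0.9:
        return "OVERLOAD"
    if event["metric"] == "heartbeat" and event["value"] == 0:
        return "FAILURE"
    return "NORMAL"

def policy(condition):
    """Policy step: map a condition to a lifecycle-management action (or none)."""
    return {"OVERLOAD": "scale-out", "FAILURE": "heal", "NORMAL": None}[condition]

def control_loop(events):
    """Run each event through analytics then policy, collecting actions taken."""
    actions = []
    for event in events:
        action = policy(analytics(event))
        if action:
            actions.append((event["source"], action))
    return actions

actions = control_loop([
    {"source": "amf-1", "metric": "cpu", "value": 0.95},
    {"source": "upf-2", "metric": "heartbeat", "value": 0},
    {"source": "smf-1", "metric": "cpu", "value": 0.40},
])
print(actions)  # [('amf-1', 'scale-out'), ('upf-2', 'heal')]
```

The point of the loop is exactly what was said above: corrective action happens without a human in the path, because the event, the decision, and the action are all wired together.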
So with that, I'm going to conclude my section, and we can proceed to what the demo actually looks like. So, Amar, can you maybe elaborate a little on the kind of support that ONAP has for 5G in particular? Yes. Like I said, ONAP is extremely strong on 5G, and I would say there are probably six broad areas where activity is going on; I'll highlight them real quick. End-to-end orchestration of a 5G network service, including the radio access network, the 5G core and the transport, is supported. End-to-end network slicing: in the upcoming Frankfurt release, there's support for the CSMF and NSMF GUI workflows. There is work going on toward harmonization with the O-RAN Alliance for RAN support. There's support for optimizing networks based on performance management and fault management data. There's also some self-organizing networks, or SON, support for assigning cell IDs. And finally, there's support for PNFs, physical network functions such as big radios. Other than this community PoC, there's also a new one that's come up in the Akraino community, Linux Foundation Edge's Akraino, which is around private LTE and 5G, so I encourage everyone to check that out. Amar, I have a question too. How is ONAP doing with regard to cloud native support? ONAP is actually doing quite well. There's a project in ONAP called MultiCloud, and inside MultiCloud there's a Kubernetes plugin. The first instance of the Kubernetes plugin came out about a year ago, and since then it has matured a lot; in fact, we are going to see some of that today. So I would say that through that project there's good support for Kubernetes. And, excuse me, Amar, I have another question too. Do I really need ONAP when I have Kubernetes for container orchestration? That's actually a really good question, Hanen, and a lot of people ask it. The short answer is yes.
And the reason is that Kubernetes is very effective when you have an application that consists of a set of containers you want to deploy in one site or one cluster. But if you have tens, if not hundreds, of thousands of edge sites, and composite applications that span those sites, and you need intent-based orchestration, complex lifecycle management, which may involve migration or replication across data centers or edge sites, and this hugely important concept of closed control loops, then you really do need another layer on top of Kubernetes, and that is ONAP. Thank you, Amar. All right. With that, I'm going to hand it over to Sriram, who is going to talk about the demo setup and then actually show the demo. All right. Thank you, Amar. So let's talk about what we are going to show today in the ONAP plus 5G cloud native demo. The focus of this demo is really the automation, so let me cover the additions to the demo that Hanen talked about from KubeCon. There are two changes: one, we've added ONAP as part of the demo, and two, it's now hosted on the UNH servers. That's the difference from the KubeCon demo, and the focus, as Amar talked about, is automation using ONAP. We've also made some simplifications for this demo: we've reduced it to one NFVI location and to just the 5G core, so essentially we replaced the 5G RAN with an emulator. That's what we're going to show in the demo. The goal of the demo is to show the onboarding of the Altran 5G core, which is what we've used, and to demonstrate the deployment of that 5G core on OpenShift. And the next phase, which is post-demo, is to continue this by integrating 5G network slicing. All right. So this is a high-level view of the demo network. On the left-hand side is the 5G network service, where it gets deployed.
And on the right-hand side is the control plane; that's where ONAP runs, and as I mentioned, we're running it in the UNH-IOL lab. This is a more detailed view of the demo. On the left-hand side, what you see is ONAP, which again is running at the UNH-IOL. It's running on open source Kubernetes, which serves as the NFVI software, on standard Intel servers. On the right-hand side is where we are running Red Hat OpenShift, and that's where the 5G core CNFs, the container network functions, get deployed. As you can see, for the 5G core we're using the Altran 5G core, whose component network functions essentially get deployed on Red Hat OpenShift. So here is a little more detail about the demo. What we're going to show is the Altran 5G core CNFs being onboarded onto ONAP using SDC, Service Design and Creation, which is one of the projects in ONAP. For the purposes of this demo, we are pre-installing ONAP, because that's obviously a different process altogether. The 5G core network service is created in ONAP using SDC, and the next step is that it's deployed on OpenShift using the ONAP Kubernetes adapter; if you remember, one of Amar's slides showed MultiCloud and its various adapters, and the Kubernetes plugin is the adapter that talks to the Kubernetes infrastructure. In this demo, we're going to use the APIs for the runtime operations: the design will be shown with the SDC GUI, but the deployment will use the APIs. As I mentioned on the previous slide, instead of using a real RAN, we're going to use a gNB emulator from Altran, which emulates both the user equipment and the RAN. Also, just a note: in today's demo we're running everything at the UNH-IOL on a single server; even OpenShift is running on that same server, as two different clusters.
So that's the demo we're going to show: essentially two clusters on a single server, one running ONAP, the other running OpenShift, where we will deploy the Altran 5G core CNFs. Then we'll run the gNB emulator to show the connectivity. All right, so now we jump to the actual demo. This is the server where we are running everything. You can see that A1 and A2 are the instances where ONAP is running, and CRC is the VM where OpenShift is running. For the purposes of this demo, we deployed a subset of ONAP, not the full functionality. The first step is to register the OpenShift cluster with the ONAP Kubernetes plugin, and we do that using the REST APIs; you can see the curl command that registers the OpenShift cluster. Then we go to the ONAP portal; for those of you familiar with ONAP, this should look familiar. We log in as a designer. As I mentioned, in case you're not familiar with ONAP, there is a design phase and there is a runtime phase; what we're doing right now is the design phase, where we're going to be designing in the CNFs. So we start the SDC GUI in the portal. The first step is to onboard what's called the VSP, the vendor software product; in this case, it's actually a container function. We're going to call it Altran NGC, for next-generation core. This is the standard onboarding process in SDC, which is common to both virtual network functions and container functions; in this case, we're showing it for container functions. Now we take the package and input it; that's the zip file that contains the package, and I'll talk about what it contains. Here I'm going to show you the contents of the zip file, the package which we are onboarding: it contains the Helm charts for the container network functions.
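For orientation, a Helm-based CNF onboarding package of the kind being described here typically has a layout along these lines. This is a sketch, not the actual demo package: the file names under templates/ are invented for illustration, and only values.yaml, Chart.yaml and the templates directory are standard Helm chart contents.

```
altran-ngc-package.zip            # hypothetical package name
├── Chart.yaml                    # chart name and version metadata
├── values.yaml                   # day-zero configuration values for the CNFs
└── templates/                    # Kubernetes manifests for each CNF
    ├── amf-deployment.yaml       # (illustrative file names)
    ├── smf-deployment.yaml
    ├── upf-deployment.yaml
    └── services.yaml
```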
It consists of the values.yaml, the Chart.yaml, and the templates, which contain the details of all the CNFs we are onboarding. That's the package we onboarded in the previous step in SDC. Now we go back to SDC and proceed to validation, where SDC ensures that the package has the right contents, and then we commit and submit. The submit succeeded. The next step is to import that, which essentially creates a model for it. You can see the Altran NGC, and we're going to import it. We are still in the design phase of ONAP; this is all SDC. Here we certify it. The next step is to create a service, a network service, which we're going to call the Altran NGC service, and we input a description. We're going to include the CNF model we just created in the previous step; this is a drag-and-drop operation where you drag it onto the palette, and that's how you create a network service. So this network service now includes the CNF we just onboarded. Now we distribute it; the process of distribution essentially pushes it out to the other modules of ONAP. Now the distribution is done, so the runtime knows about this network service. Now we're back in the runtime, where we can start instantiating it. You can see that right now there are no containers running. The first step is to create a profile in the resource bundle; the profile is like an instance of the model we just created. Like I mentioned, this part all uses REST APIs; we are talking directly to the Kubernetes plugin. This step creates the instance. This is the final step, where the CNF actually gets deployed on the cloud we registered, which in this case is OpenShift. Now we've run the create instance, and with this, we should see the containers coming up on OpenShift. Yeah, so that's done.
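As a rough sketch, the three API-driven runtime steps just walked through (register the cluster, create a profile, create the instance) could look something like this. The endpoint paths, port, and field names below are approximations of the ONAP multicloud-k8s plugin API written from memory for illustration, not an authoritative reference; the code only constructs the requests rather than sending them.

```python
# Hypothetical plugin address; in the demo this would point at the ONAP cluster.
BASE = "http://onap-multicloud-k8s:9015/v1"

def register_cluster(cloud_region):
    """Step 1: register the OpenShift cluster by posting metadata (the
    kubeconfig file travels alongside as a multipart upload)."""
    return {
        "url": f"{BASE}/connectivity-info",
        "metadata": {"cloud-region": cloud_region, "cloud-owner": "demo"},
    }

def create_profile(rb_name, rb_version, profile_name):
    """Step 2: create a profile, i.e. an instance of the onboarded model."""
    return {
        "url": f"{BASE}/rb/definition/{rb_name}/{rb_version}/profile",
        "body": {"profile-name": profile_name, "namespace": "5g-core"},
    }

def create_instance(rb_name, rb_version, profile_name, cloud_region):
    """Step 3: instantiate; this is the call that actually deploys the CNFs."""
    return {
        "url": f"{BASE}/instance",
        "body": {
            "rb-name": rb_name,
            "rb-version": rb_version,
            "profile-name": profile_name,
            "cloud-region": cloud_region,
        },
    }

steps = [
    register_cluster("openshift-crc"),
    create_profile("altran-ngc", "v1", "default"),
    create_instance("altran-ngc", "v1", "default", "openshift-crc"),
]
for step in steps:
    print(step["url"])
```

In the demo, each of these requests was issued with curl against the plugin; the shape above is just meant to show how little is involved on the runtime side once the design-time artifacts are distributed.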
Now, through the OpenShift command line interface, we can see the pods. You can see a whole bunch of these NGC pods in the creating state. The oc command is the OpenShift command line interface for running Kubernetes commands. It takes a little bit of time for the containers to go to the running state. Yeah, now you see all of the containers running, so the 5G core is now running on OpenShift. Now what we're going to do is run the gNB simulator. Since we don't have a real RAN CNF here, we test with a simulator, and for this we use Altran's gNB simulator. We are running that simulator now. You can see that the simulator is talking to the 5G core CNFs and running some tests, which essentially show that the 5G core containers are healthy. You can see that the tests have passed; the simulator makes sure the 5G core containers are all running and in a healthy state. So that's the end of the demo: essentially we showed it using the emulator instead of real user equipment and a RAN network. That brings us to the end of the demo. Okay, wonderful. We have a lot of questions, so in the interest of time, I'm going to speed things up so we can get to them. I would like to thank all the contributors to this demo: A10, Aarna Networks, Altran, Intel, Kaloom, Lenovo, Red Hat, Turnium and the UNH-IOL lab. Let's talk a little bit about what's next. We are going to show the next installment of this work at ONES, which is scheduled for September. One very important thing to note is that these activities are not static; they're not a one-time effort, they're continuous. In the next installment we hope to show some network slicing; that will be the next phase. You can learn about the event at this link. How to get involved? Like I mentioned, these are not static activities.
So please get involved. You can join phase two of the demo, which will be around network slicing; you can see the link. You can get involved with ONAP and the other LFN projects, and with CNTT, whose R2 release is working on the NFVI from a containerized point of view. OVP is a compliance and verification program. There's also work going on between CNCF and the LFN community on telco requirements, at this next link. And you can learn more about the original demo that Hanen talked about at the last link. With that, we can address some questions. Great. As you said, Amar, there's a handful of questions here; we're just going to go down the list. Our first question: the 5GC, MME, HSS, etc., are they open source components? This particular demo does not use an open source 5G core; it's using a containerized 5G core from a company called Altran. However, there are others which we are keeping an eye on. OCN is creating an open source 5G core, and of course there are other efforts as well. Just to add to that, Amar: we are working with the OpenAirInterface Software Alliance, which is open source, and they actually built the 4G core for the KubeCon demo. Correct. And they are also working towards 5G, I believe. Correct. Great. The next question: can we move with public cloud on the edge? I think a more accurate way to put that is: can we move with public cloud technologies at the edge? Because at the edge, the real estate is controlled by telecom operators, cable companies and enterprises, and in some cases, with smart cities, it may be owned by the city. So it's not so much that you can bring the public cloud to the edge, but you can bring public cloud technologies. All of the big three offer them: Amazon has something called Outposts, Google has something called Anthos, and Microsoft, I apologize, I forget the latest term, but it's based on their Arc technology. So all of them have edge technologies that can be used. Great. Our next question is about the demo.
Did you face any latency challenges? Sriram, do you want to take that? What we showed is just the control path: onboarding the 5G core and running an emulator. We actually did not see any latency issues in that part of the demo. Okay. Next question: other than AT&T, who is running ONAP in production? In fact, there's a new piece coming out that will talk about Bell Canada's use of ONAP in production. And at the POC or pre-production level, there are numerous telco operators working on different use cases. We don't have time to cover all of them, but suffice it to say there are probably half a dozen that are just one step before production. Great. Next question: how does ONAP compare with, or coexist with, MANO? ONAP is in the MANO category; it just happens to have functionality over and above that of MANO. All of the MANO functionality, in terms of the NFV orchestrator and the VNF manager, is already in ONAP. Great. Thank you. Is Istio required with ONAP to manage Kubernetes? And the second part of that: which ONAP release and version is used in the demo? In this demo, we've actually used the latest one, master, which is essentially Frankfurt. Even though Frankfurt is not officially out, master is very close to Frankfurt, and that's what we've used in this demo. In terms of Istio: Istio is not required, but there is a strong move in ONAP towards Istio from multiple points of view. From the Kubernetes plugin's perspective, Istio can be in the NFVI on which CNFs and other workloads will be deployed. And in fact, OOM, ONAP's installation and life-cycle management project, is also moving in the direction of Istio. And I think someone just clarified that the Microsoft technology is called Azure Edge Zones for carriers. So thank you, Raja, for that.
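To make the Istio point above concrete: opting workloads into the mesh is typically done by labeling their namespace so Istio's webhook injects the sidecar. This is a hedged sketch; the namespace name onap is only an illustration, not from the demo:

```shell
# Istio's sidecar injector acts on namespaces carrying this label.
ns="onap"
inject_label="istio-injection=enabled"

# On a live cluster you would run:
#   kubectl label namespace "$ns" "$inject_label" --overwrite
#   kubectl get namespace "$ns" --show-labels

# Offline: just assemble and show the command that would be issued.
cmd="kubectl label namespace $ns $inject_label --overwrite"
echo "$cmd"
```

After the label is applied, newly created pods in that namespace get an istio-proxy sidecar container without any change to their manifests.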
Okay, another question: you mentioned ONAP is used for multi-cluster orchestration. How is the right cluster selected at deployment time? There's a notion of intent. When you deploy a composite application, the Kubernetes plugin has a number of concepts, such as a profile, which determines day-zero configuration, and an intent, which determines where the composite application gets deployed. Okay, somebody's asking when they can get the recording; that's going to be available tomorrow. Is it possible to get the NGC package in order to test it with the Kubernetes plugin? The demo, of course, is open: how the demo is constructed, all of that know-how, is available. The next-generation core, the 5G core, is from Altran, so you would essentially have to get an evaluation license for that one component. If you contact any of the three panelists, we can help you with that. Great. Thank you. What are the minimum hardware requirements for an ONAP installation? Yeah, so it depends on which components of ONAP you require. For this demo, we've actually deployed only the subset that's required to orchestrate the CNFs. But for a full ONAP deployment, the requirements are actually well documented. A non-HA deployment for development can probably run on as few as 64 virtual CPUs and close to 128 GB of RAM; that's typically what we use internally for our development. Of course, HA deployments will require more resources and more servers. But the very basic minimum for testing and development can be done on a single, fairly powerful server, and a minimal subset can run on an even smaller one. Okay, and a second question on that one: is it possible to simulate end-to-end 5G network slicing and orchestration using the current ONAP version, or should we wait for the next version? If that hasn't been answered already: Amar, go ahead.
Yeah, we have to wait for Frankfurt; Frankfurt has that support, and it's right around the corner. Okay, a few more questions. Now we have hardware acceleration from Mellanox and Xilinx; how is FPGA-based acceleration supported? Hanan, do you want to talk about the acceleration in the original demo? Yeah, we actually used acceleration for the radio part, on the hardware that was in San Diego. For the radio access network to perform, we used FPGA cards, and SR-IOV as well on the EPC side. So the technologies are available for acceleration. Okay, great. We had a question about how this will integrate with Open Horizon, which is a new project within LF Edge. We may have to take a rain check on that one; I don't know if any of the panelists are familiar with Open Horizon. Yep, that's fair. Okay: is the CNF using Helm 3? Yes, it is. Can you shed some light on CUPS support? Yes, so CUPS, control and user plane separation, is essentially a data-path discussion. It doesn't really affect the latest demo, but having said that, on the 5G core side the UPF is of course separated from all the other control-path 5G core functions. Okay: how was geo-redundancy achieved in the cloud-native deployment shown here, what use cases were tested, and were data-path accelerators like DPDK, VPP, etc. also used? So I think that's a multi-part question; the first part was geo-redundancy, right? Yes. Geo-redundancy can be accomplished from two points of view. One is to have a containerized application with replicated pods: if the pods are replicated, say with a replication factor of three, you get availability within the cluster. And then the second technique is to replicate across geographies.
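The first technique, intra-cluster availability with a replication factor of three, can be sketched as a Kubernetes Deployment with anti-affinity spreading the replicas across nodes. All names here (hypothetical-smf, the image, the file name) are illustrative assumptions, not the actual demo manifests:

```shell
# Write a sketch Deployment: three replicas, at most one per node.
cat > smf-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hypothetical-smf
spec:
  replicas: 3            # replication factor of three within the cluster
  selector:
    matchLabels: { app: hypothetical-smf }
  template:
    metadata:
      labels: { app: hypothetical-smf }
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels: { app: hypothetical-smf }
            topologyKey: kubernetes.io/hostname   # spread across nodes
      containers:
      - name: smf
        image: example.com/smf:latest
EOF

# On a live cluster you would apply it with:  oc apply -f smf-deploy.yaml
grep -c 'replicas: 3' smf-deploy.yaml   # sanity check: prints 1
```

With this in place, losing any single pod or node leaves two replicas serving; the second technique, replicating across geographies, then layers whole-cluster redundancy on top.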
This particular demo, as Sriram mentioned, is at one site, but for the next phase those are exactly the types of things we are looking at. And if you would like, please do get involved; we are always looking for expertise in these areas. Yeah, that actually leads to another question: someone's asking how to join ONAP. To join ONAP, all you really need is a Linux Foundation ID. You can get it at identity.linuxfoundation.org; it's a two-minute process. Once you have that, you can get onto the wiki, start participating, join meetings, and start contributing. You can become a Linux Foundation member, which affords you additional advantages, but it's not necessary; you can literally get going in two minutes. Okay, great. I know we still have a handful of questions, but we are actually at time, so we will try to get to those questions via email. If folks do have questions, you can email PR at LFN at linuxfoundationnetworking.org and we will answer them. I want to thank all of our panelists today, and thank you to everyone who participated. Stay tuned for our next LFN webinar. A recording of this session will be available tomorrow. Have a great day.