So first of all, thank you for having us — it's a great opportunity to share our project, the Tacker project, and its activity, titled "Integrated Orchestration for VM-based 5GC and Container-based RAN." So let's get started. My name is Yasufumi Ogawa, and with me are Hiromu Asahina and Yuta Kazato from NTT, one of the Japanese telecom companies. Here is the outline of today's session. First, I would like to provide an overview of the Tacker project; then life cycle management for VM-based 5GC and container-based RAN; then we'd like to provide a demonstration; and finally a conclusion and an invitation to the Forum session we will have on Thursday.

So, the overview of the Tacker project. What is Tacker? Tacker is an official OpenStack project building a generic VNF Manager (VNFM) based on the ETSI NFV Management and Orchestration (MANO) architectural framework. As described in the diagram, Tacker is an implementation compliant with the ETSI NFV standard. Tacker enables operators to deploy and operate network services and virtual network functions on an NFV infrastructure platform such as OpenStack or Kubernetes. The Tacker community is now contributing to O-RAN SC, the open-source project of the O-RAN Alliance working groups, to create a working software solution enabling an open and intelligent 5G RAN, which is our main area of interest.

And this is the background of today's session. A 5G network includes both the radio access network (RAN) and the core network (CN), and we want to orchestrate both CN and RAN in the same manner. ETSI NFV defines the northbound interfaces and service management for the CN, while O-RAN defines orchestration for the RAN. Tacker has been developed as an ETSI NFV standard-compliant VNF manager, with feedback given back to the standard, and has become an SMO-related project of O-RAN SC — which means we keep a relationship with the standards bodies, bringing the standards into the implementation and returning feedback to the standards.
And then I would like to focus more on the ETSI MANO architecture. ETSI NFV defines the MANO architecture, which enables network operators to manage virtualized resources, and this diagram is an overview of it. The right-hand side shows the MANO components: NFVO, VNFM, and VIM. The NFVO is for orchestration, the VNFM is the resource manager, and the VIM manages the infrastructure, such as OpenStack or Kubernetes. The MANO components are connected by standard interfaces, and they use the standard descriptor and virtual images as a VNF package.

And this is Tacker's role in MANO. Tacker acts as a VNFM, as shown in the red box, and communicates via the ETSI NFV standard APIs. Tacker manages the life cycle of VNFs according to requests from users or the NFVO. Tacker enables operators to manage virtual resources and to combine it with various VIM and NFVO software supporting the ETSI standard APIs.

Here is a detail of Tacker's architecture. The original goal of the Tacker project was to implement the MANO features defined in ETSI NFV, but we didn't focus on the design because the standard was not mature enough at that time — which means the components of Tacker were not so compliant with ETSI NFV back then. Now we are focusing more on compliance with ETSI NFV itself and have redesigned around the VNFM features defined in the standard, so the refined architecture is now compliant with the ETSI NFV standard, providing its features via standardized REST APIs.

Okay, next I will explain life cycle management for VM-based 5GC and container-based RAN. Regarding the requirements for Tacker, there are typical VNFM use cases such as EPC, 5GC, MEC, and RAN, and virtualized RAN in particular is a target for the latest ETSI NFV Release 5. Tacker covers the three standard interfaces as a generic VNFM: between VNFM and NFVO, between VNFM and VNF, and between VNFM and VIM. And so we cover features such as multi-VNF and multi-VIM support and so on.
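To give a feel for what "providing its features via standardized REST APIs" means, here is a minimal sketch (not the project's actual client code) of the kind of SOL003-style request bodies a consumer such as an NFVO sends to a VNFM; the descriptor ID, flavour ID, and VIM ID values are hypothetical:

```python
import json

# Illustrative ETSI SOL003-style VNF LCM request bodies.
# The concrete IDs below are made up for the sketch.
def create_vnf_request(vnfd_id: str, name: str) -> dict:
    """Body for POST /vnflcm/v1/vnf_instances (CreateVnfRequest)."""
    return {"vnfdId": vnfd_id, "vnfInstanceName": name}

def instantiate_vnf_request(flavour_id: str, vim_id: str) -> dict:
    """Body for POST /vnflcm/v1/vnf_instances/{id}/instantiate."""
    return {
        "flavourId": flavour_id,
        "vimConnectionInfo": [{
            "id": "vim-0",
            "vimId": vim_id,
            "vimType": "ETSINFV.OPENSTACK_KEYSTONE.V_3",
        }],
    }

body = instantiate_vnf_request("simple", "openstack-core-cloud")
print(json.dumps(body, indent=2))
```

The point of the standardized bodies is that any SOL003-conformant NFVO can drive any conformant VNFM with the same payloads.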
And here is an example of VNF life cycle management in MANO. Tacker, as a VNFM, covers day-1 and day-2 operations, such as configuring the VNF and run-time operations.

And here are new Tacker features for container-based RAN. The Tacker Zed and Antelope releases enhanced key features for container-based RAN ahead of time. There are four key features: enhanced CNF life cycle management, monitoring and resiliency, enhanced security, and Kubernetes support. We will introduce some of the features in detail.

For monitoring and resiliency, Tacker provides autonomous scale-out or healing in its life cycle management. To enhance resiliency, Tacker implements the fault management and performance management interfaces for monitoring resources, integrated with Prometheus, the de facto standard monitoring tool. Tacker provides a configuration function for Prometheus, receives alerts from Prometheus, and translates them into alerts in the standardized ETSI format.

Next, Tacker also implements security features for further use cases, following ETSI NFV, 3GPP, and commercial operation requirements. OAuth 2.0 mutual TLS is one of the common security methods recommended by ETSI NFV and 3GPP. We developed it working with the OpenStack Keystone project, covering the OpenStack infrastructure, in the Antelope cycle. We are also developing access control — API and resource access control. We extend oslo.policy for the VNF LCM use case and commercial requirements, so we can manage fine-grained access control based on user information and VNF information such as company, deployed area, and tenant information.

And regarding container-based RAN, we have integrated the VNFM into the O-RAN Software Community (O-RAN SC). O-RAN SC is a Linux Foundation project, supported by the O-RAN Alliance, to realize the implementation of the O-RAN specifications in open source code. Tacker can manage network function deployment as a part of the DMS, the Deployment Management Services.
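As a rough illustration of the monitoring integration described above — receiving a Prometheus alert and translating it into the standardized ETSI format — here is a hedged sketch, not Tacker's actual code; the exact field mapping and severity handling are assumptions:

```python
# Sketch: map one Prometheus Alertmanager webhook alert into an
# ETSI SOL-style fault-management alarm dict. Field choices here
# are illustrative assumptions, not Tacker's real mapping.
def to_etsi_alarm(prom_alert: dict, vnf_instance_id: str) -> dict:
    firing = prom_alert["status"] == "firing"
    severity = (prom_alert["labels"].get("severity", "warning").upper()
                if firing else "CLEARED")
    return {
        "managedObjectId": vnf_instance_id,   # which VNF instance the alarm targets
        "alarmRaisedTime": prom_alert["startsAt"],
        "ackState": "UNACKNOWLEDGED",
        "perceivedSeverity": severity,
        "probableCause": prom_alert["annotations"].get("summary", ""),
    }

alert = {
    "status": "firing",
    "startsAt": "2023-06-14T10:00:00Z",
    "labels": {"severity": "critical", "alertname": "PodDown"},
    "annotations": {"summary": "gNB pod is down"},
}
print(to_etsi_alarm(alert, "vnf-0001")["perceivedSeverity"])  # CRITICAL
```

The design point is that the monitoring tool stays Prometheus-native while consumers of the FM interface only ever see ETSI-formatted alarms.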
DMS covers Tacker's scope of management in the O-RAN Software Community. And here is a figure of the O-RAN architecture. The O-RAN architecture consists of the Service Management and Orchestration (SMO) framework and radio-side functions such as the RU, DU, and CU, running on the O-Cloud as infrastructure, and standardized interfaces are defined between the components. Tacker now mainly contributes to the SMO and O-Cloud as an ETSI NFV-based generic VNFM, and manages the O-Cloud's Deployment Management Services (DMS) via the O2 interface here.

Tacker has contributed to O-RAN SC since the F release, up to the latest release now. Working with the Tacker community, Tacker is expanding its scope of support in O-RAN — please kindly check the latest documents in O-RAN SC or the Tacker project in the OpenStack documentation. In the O-RAN specifications, the O-RAN cloudification and orchestration specifications utilize the ETSI NFV aspects, and the O2 DMS has now been updated to refer to ETSI NFV SOL 002 and SOL 003 for the NFVO and VNFM. Current O-RAN SC development aims at an automation framework, so we contribute test code to O-RAN SC based on ETSI NFV-TST 010, written in the Robot Framework. And Tacker, as a generic VNFM, is collaborating with StarlingX as an O-Cloud to demonstrate our technical concept in O-RAN SC as well. We will continue to contribute to O-RAN SC and OpenStack for further RAN use cases.

And now we will move on to the 5GC and RAN demonstration from Hiromu Asahina. Thank you.

Okay. Like we said, Tacker can handle both the 5GC and the virtualized RAN in terms of LCM, so I'm going to show you a brief demonstration of Tacker handling the LCM for both the RAN and the core network. And this is the picture of our goal. There are two infrastructures there: OpenStack and OpenShift. It's not StarlingX, which also means Tacker is not tightly bound to StarlingX. OpenShift acts like an edge cloud here, and OpenStack acts like the core network cloud, the central cloud. And there are two VNF packages: free5GC and UERANSIM.
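Before deploying anything, the demo registers the two clouds as VIMs. As a hedged sketch of what those two registrations look like — field names follow the general style of Tacker's VIM configuration, but the endpoints and credentials here are made up:

```python
# Sketch of the two VIM registrations behind the demo: an OpenStack
# "central" cloud and an OpenShift/Kubernetes "edge" cloud.
# All endpoint URLs and credentials are hypothetical placeholders.
openstack_vim = {
    "auth_url": "http://central-cloud/identity/v3",  # hypothetical Keystone endpoint
    "username": "nfv_user",
    "password": "secret",
    "project_name": "nfv",
    "domain_name": "Default",
}

kubernetes_vim = {
    "auth_url": "https://edge-openshift:6443",  # hypothetical API server
    "bearer_token": "REDACTED",
    "type": "kubernetes",
}

# Each VNF package is then instantiated against the matching VIM:
placement = {"free5GC": openstack_vim, "UERANSIM": kubernetes_vim}
for pkg, vim in placement.items():
    print(pkg, "->", vim["auth_url"])
```

The point of the picture is that one VNFM fronts both VIM types, so the operator's workflow is identical regardless of which cloud a package lands on.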
Unfortunately, there is no readily usable virtualized RAN application nowadays, so we use UERANSIM; but from those two VNF packages we can still get some clues about how Tacker handles the LCM for both the RAN and the core network. We will deploy these two VNF packages to the appropriate infrastructure. So let's get started.

Like I said, this is the goal, and the initial situation is like this: there is no core network or RAN yet. First, we register the VNF package for the core network and instantiate it. It's a little bit small on screen, but now we are registering OpenStack as a VIM so that we can deploy a VNF package onto that OpenStack. And this is the VNF package content for free5GC. Then we upload the package. It looks easy — registering the package is just like uploading a container image to a container registry.

So now, create the VNF. We create the VNF for free5GC. The create step just registers the VNF package in the Tacker DB so that Tacker can load that package when instantiation happens. And now we instantiate that VNF package, which means provisioning the 5GC on OpenStack, and it's accepted here. On the OpenStack side, what happens is that the VNF is created on Nova as Nova servers, so we can see a stack created by Heat, and the VNF launching in the background. We can see the resources created here, and the servers created here. So from the Tacker point of view, provisioning the 5GC is really simple: just upload the package and run the instantiation. And this is the information of the created VNF, which has already been instantiated.

So let's check that free5GC is actually working. It's working, and there are no UEs registered yet. So now we're going to make the vRAN. It's UERANSIM, like I said, and it is composed as a Helm chart. The VNF package contains a Helm chart that Tacker can handle, which means Tacker can handle both an OpenStack Heat template (HOT) and a Helm chart in the same way. So this is the VNF package for UERANSIM.
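For a feel of what such a "VNF package" actually contains, here is a hedged sketch of a minimal ETSI SOL004-style CSAR layout with a Helm chart carried as an artifact; the file names and the chart path are illustrative assumptions, not the demo's exact package:

```python
import io
import zipfile

# Sketch of a minimal SOL004-style VNF package (CSAR = a zip with a
# TOSCA-Metadata entry point). File names/contents are placeholders.
def build_csar() -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr(
            "TOSCA-Metadata/TOSCA.meta",
            "TOSCA-Meta-File-Version: 1.0\n"
            "CSAR-Version: 1.1\n"
            "Entry-Definitions: Definitions/vnfd_top.yaml\n",
        )
        z.writestr("Definitions/vnfd_top.yaml", "# top-level VNFD (placeholder)\n")
        # A Helm chart shipped as an artifact for the CNF case:
        z.writestr("Files/kubernetes/ueransim-0.1.0.tgz", "")
    return buf.getvalue()

pkg = build_csar()
print(zipfile.ZipFile(io.BytesIO(pkg)).namelist())
```

Whether the artifact under `Files/` is a Heat template or a Helm chart, the package shape and the upload workflow stay the same, which is what lets the VNFM abstract over both.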
So maybe you can see the Helm chart and the required TOSCA templates here. And like we did in the previous step, we're going to create the VNF package first and upload the contents to it. So it's registered, and uploaded. Now there are two packages, free5GC and the just-uploaded UERANSIM. And again, we're going to create the VNF and instantiate it, and the operations described here are the same as what we did for free5GC — Tacker appropriately abstracts the operation for both a Helm chart and a HOT.

Now we have instantiated UERANSIM; you can see the state has changed to INSTANTIATED. So let's check that the Helm chart was successfully deployed. Yes, we can see UERANSIM is available from the Helm CLI and also from the OpenShift commands, so it's running. Okay. So we have deployed both free5GC and UERANSIM. Now it's time to register UERANSIM with the 5GC and try to ping from UERANSIM through the 5GC. So the data plane is connected, and we attach to UERANSIM, which is running as a pod, and ping to Google, like we said. Yeah, it's working.

So let's scale it out. In this example, we're going to scale out the gNodeB (gNB) from Tacker. First, check the current resources: it's not scaled out yet, there's only one pod. So let's scale out the UERANSIM gNodeB with just one command, like instantiation: just specify the VNF ID and send a scale-out request to Tacker. Then Tacker converts that request into a scale-out request for Helm, and the pods have successfully been scaled out. Yeah, you can see the two pods for the gNodeB.

Okay. So let's terminate everything. Termination is just termination: all we have to do is run the termination commands to Tacker. First terminate this one — for free5GC, I guess. And yes, for free5GC as well. We can see the status has changed to NOT_INSTANTIATED, which means it was successfully terminated. So let's check that the resources are completely deleted from the Helm CLI and the OpenStack CLI.
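The "just one command" scale-out shown above corresponds to a single SOL003-style scale request. A minimal sketch of that request body — the aspect ID `gnb_aspect` is a hypothetical name, since the real one comes from the package's VNFD:

```python
# Sketch of an ETSI SOL003-style ScaleVnfRequest, like the one behind
# the demo's gNB scale-out. The aspectId "gnb_aspect" is hypothetical.
def scale_out_request(aspect_id: str, steps: int = 1) -> dict:
    """Body for POST /vnflcm/v1/vnf_instances/{id}/scale."""
    return {"type": "SCALE_OUT", "aspectId": aspect_id, "numberOfSteps": steps}

print(scale_out_request("gnb_aspect"))
```

The VNFM then maps this generic request onto the backend's own mechanism — here, bumping the replica count through Helm; on OpenStack, updating the Heat stack.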
We would need to verify this in an actual situation, of course, but since this is a demonstration, we just check from the CLI that the resources are deleted. So there are no resources left. Okay, that's all. Thank you very much.

And do you have anything more to explain on that last point? Oh, okay. That's all, thank you very much. Any questions or comments?

I'm Robert Juan. I have one question related to VNFs. If we have many different tenants, each with its own VNF service chain, how do you deploy them for each tenant's purposes? Do you get the question? Because, for example, I am one tenant and he is another tenant; then we may use different VNF functions, so you have different service chains in the cloud. So how do you deploy the different service chains for each tenant? Thank you very much.

Thank you for your question. Let me confirm: your question is how Tacker realizes multi-tenancy, right? Tacker handles multi-tenancy on Kubernetes with namespaces, so we can use namespaces to split the tenants. What we have to do is just put in the appropriate namespace when we instantiate the VNF; by doing that, you can split the service chains by namespace. Does that answer it?

Yeah. Okay. But if you deploy them on Kubernetes, you can completely split them by the Kubernetes namespace feature. So I think it's — yes, you have to use the same namespace for your deployment, and he has to use his own. Okay. Yeah. Any other questions?

Are you able to layer those in through templating, through Tacker? Because I'll have custom deployments for different customers that I'm supporting, where they potentially have a different firewall automation that they need to integrate, or a different ingress or egress controller that we need to set up.
Are you able to layer that stuff in through your Tacker deployment for the different namespaces, or do you have to pre-set-up your different namespaces outside of Tacker in that multi-tenant flow? I don't quite get that. So, assuming you have multi-tenancy and you're leveraging that through different namespaces — instead of just two, I've got 17 or 30 or 70, right? Each of those tenants has to have different controls placed around it, for network connectivity and also for internal RBAC. Is that done through Tacker, or are you layering that in separately? It's layered in separately. Yeah. Any other questions? Okay, if there are no questions, we will close this presentation. Thank you very much.