Welcome to this talk about VNF onboarding. Last year, the OSM VNF onboarding task force delivered a talk in Shanghai about the basics of VNF onboarding and what we do in our team. This time, we would like to take things a little further by deploying a production-ready network function live.

First, an introduction to OSM and the VNF onboarding task force. Open Source MANO is a community-driven, production-quality, end-to-end network service orchestrator for operators. It focuses on deploying operational network services, interfacing with virtualization platforms, SDN controllers, and the network functions themselves. Because OSM's information model is very rich and standards-based, operators can design their network services without worrying about the virtualization of resources or the underlying infrastructure: the platform is completely agnostic to the infrastructure. The complete lifecycle of network functions, network services, and network slices is covered, from initial deployment to daily operation and monitoring.

The OSM onboarding task force is a group within OSM that supports VNF onboarding. It is open to all OSM members. It maintains very rich documentation about VNF onboarding and, starting in the last quarter of this year, it also maintains a repository of VNFs. It proposes new features needed to improve VNF onboarding, and it leads the content of the OSM hackfest sessions.

VNF onboarding is all about simplifying network service deployment and operations by generating unique VNF packages whose model already contains all the necessary information and scripts to cover: the instantiation of network services or slices, making VNFs manageable, which we know as day-zero operations; the initialization of VNFs so they provide the expected service, which we know as day-one operations; and the operation of the service itself by monitoring it, reconfiguring it, and implementing any needed closed-loop actions over it, known as day-two operations. Network functions virtualization will only scale if all of these tasks can be automated, and this is especially true for 5G.

In this demonstration, we will show how this looks in real life by deploying an open source Evolved Packet Core created by the open source project Magma. We will be launching a fully operational Evolved Packet Core. But first, what is an Evolved Packet Core? Basically speaking, it is the main platform that stands between smartphones and Internet access, hosted by operators and managing all the control and data communications distributed by the cellular base stations. Its main components are the MME, the serving gateway, the packet gateway, and the subscriber database, here provided by the HSS. In this particular case, we will be using an open source EPC called Magma, which is very software-oriented and distributed in nature. It collapses the main functions in order to serve content closer to the end users, so it comprises two main components: the access gateways, which could be deployed all over a country, and, to the right, the manager of these elements, called the Magma orchestrator. These two components are the network functions we want to deploy in an automated fashion, using an NFV orchestrator. With Open Source MANO as the NFV orchestrator, we can deploy the virtual machine-based and containerized network functions of Magma on top of OpenStack and Kubernetes, respectively.
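To make that last point concrete, here is a minimal sketch of how both platforms would be registered with OSM before any deployment. Every name, URL, and credential below is hypothetical, and exact flags can differ between OSM releases:

```bash
# Register an OpenStack VIM account with OSM (credentials are placeholders).
osm vim-create --name demo-openstack \
  --account_type openstack \
  --auth_url https://openstack.example.com:5000/v3 \
  --user osm --password 's3cret' --tenant osm-demo

# Attach a Kubernetes cluster associated with that VIM so OSM can place
# CNFs on it. The kubeconfig path, network mapping, and version are
# illustrative only.
osm k8scluster-add demo-k8s \
  --creds ~/.kube/config \
  --vim demo-openstack \
  --k8s-nets '{"net1": "mgmt"}' \
  --version "1.15" \
  --description "Demo cluster for Magma CNFs"
```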
In the end, we will automate the deployment of the EPC manager CNF on top of Kubernetes; the access gateway, along with a cellular station and a smartphone emulator to run some tests; all the configuration needed to have this EPC ready to receive subscriber connections; a high-performance interconnection using SR-IOV; and we can even automate the configuration of a data center gateway and the switching fabric that interconnects the components. Furthermore, following a principle closely related to 5G, we can orchestrate many of these stacks by leveraging the network slicing concept and share one of the functions, the managing one, across all of them.

Let's see this in action. We log into the Open Source MANO platform and see that there are no packages here: no network service packages and no VNF packages, which are the ones that describe how the functions will be deployed. We do have an integration with an SDN controller, an integration with a VIM, which is an OpenStack, and an integration with a Kubernetes controller, as well as an associated Helm repository. We have also integrated an OSM repository for VNFs, called LTE VNFs, hosted at ETSI, which provides some network services and VNFs related to the Magma EPC. In OpenStack, we just have a virtual router deployed, simulating a physical network function that is already in place. We also have a Kubernetes cluster, where we can see that our namespace has no pods deployed yet. Finally, we have access to the SDN controller, which manages three physical switches connected to the infrastructure: the compute nodes are connected to two leaf nodes, and a spine node interconnects the leaves.

Through OSM's command line, we can see which packages the repository provides. We can see a KNF package, a VNF package for the Magma access gateway, and a gateway NF package, which refers to the physical network function that is already deployed. So we have three packages to onboard. We can also see some network services, which are composed of the VNFs we want to deploy: in particular, the Facebook Magma network service, which is the EPC manager, and the hackfest Magma access gateway eNodeB network service, which comprises the VNF that includes the access gateway and the emulator, plus the gateway PNF. If we want to onboard one of these onto our system, it's just a matter of running the NF package create command and pointing it at the repository we want to download the VNF from. We can do the same with the rest of the VNFs and network services. This is a very powerful feature that lets us instantly download a network service package or a VNF package, upload it into the system, and get it ready for deployment.

In this demo, we are also using a network slice that integrates the two network services we have, so we just drag and drop the template of the network slice. This template simply references the network services we are deploying, the EPC and the EPC manager, and specifies which one will be shared with other slices; in this case, the EPC manager is the one that is going to be shared. Now we just need to deploy this network slice. Let's give it a name, magmaepc01, select the VIM over which it will run, which also includes the Kubernetes cluster on top of it, and provide some parameters at instantiation time. In this specific case, I'm just passing the IP address I want to use for the EPC manager. Create.
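For reference, the onboarding and instantiation flow just shown maps onto OSM client commands roughly like the following. This is a sketch only: the repository URL, package names, and parameter file are approximations inferred from the narration, and support for the repository options depends on the OSM release:

```bash
# Register the ETSI-hosted package repository (URL is illustrative).
osm repo-add --type osm lte-vnfs https://osm-download.etsi.org/repository/lte-vnfs
osm repo-list

# Onboard the three NF packages and the two network services straight
# from the repository (names approximate the ones shown in the demo).
osm nfpkg-create --repo lte-vnfs magma_orc8r_knf
osm nfpkg-create --repo lte-vnfs magma_agw_vnf
osm nfpkg-create --repo lte-vnfs gateway_pnf
osm nspkg-create --repo lte-vnfs magma_orc8r_ns
osm nspkg-create --repo lte-vnfs magma_agw_enb_ns

# Onboard the slice template, then instantiate the slice, passing the
# EPC manager IP at instantiation time (the parameter file's schema
# depends on the template, so it is only sketched here).
osm netslice-template-create magma_slice_nst.yaml
osm nsi-create --nsi_name magmaepc01 \
  --nst_name magma_slice_nst \
  --vim_account demo-openstack \
  --config_file magmaepc01_params.yaml
```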
And this starts the whole automation process. In OpenStack, we see the virtual machines starting to get deployed. In Kubernetes, the pods that make up the EPC manager are also being deployed, and we can see everything starting to converge. One special thing we are doing here in an automated way is interconnecting two of these VMs, the access gateway and the user emulator, through a high-performance network that traverses a physical fabric: these two virtual machines connect through SR-IOV ports using VLAN 1063. If we go to the SDN controller, we see some action here: clicking on one of the leaf switches shows an entry, an OpenFlow entry in this case, for VLAN 1063 that goes all the way through the fabric to achieve end-to-end connectivity between both elements.

Let's see how our EPC manager is being deployed over Kubernetes. We can see that it is ready and that it exposes a service for its dashboard. This is the dashboard of the EPC manager: it has already created a network and integrated the access gateway. So day-one configuration, which makes all the network functions operational, has happened automatically. Back in OSM, we see that the two network service instances, the EPC and the EPC manager, are both deployed.

Furthermore, we can start testing some traffic. Let's emulate a user equipment. The emulator comes with some primitives, that is, functions that can be used to emulate the attachment of a user equipment. We specify here that we want to run a primitive over the srsLTE emulator and attach a user equipment. These primitives are completely customizable: you can add whatever action you want, with whatever parameters you want, and those are associated with scripts already embedded in the package. So, let's enter the values for the testing SIM card, which, by the way, has already been provisioned in the HSS. Now, let's go to the emulator machine to see whether it already has Internet connectivity. We can see that it has brought up a tunnel: a GTP tunnel that goes directly to the access gateway, running over the SR-IOV network, that is, through the switches. We can actually watch this traffic in the dashboard of the SDN controller.

Furthermore, we should be able to reach the Internet. However, we see that traffic is not passing. That is because of the firewall here, which, as we said, is a physical network function emulated by a router, and it is blocking the traffic. But we also said that this router is part of our network service and that we have a certain degree of control over it. So let's send a primitive to it in order to allow our traffic: we select the VyOS PNF, which was already part of our package, choose Configure Remote, and it has a primitive that asks for the Magma IP address to allow. Our IP address, in this case, is the one on the SGi interface towards the Internet. We enter it here and send the configuration, using the Ansible tool, to the router that emulates the physical network function. Traffic will pass now. So, we have end-to-end traffic, and we have demonstrated a fully operational Evolved Packet Core that was launched in minutes and completely configured. If we wanted to launch more instances of access gateways, for example all over a country, across different data centers, we could do so simply by launching new slices.
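Both primitives from this part of the demo can also be triggered from the OSM client with ns-action. Again a sketch only: the NS instance, VNF member, primitive, and parameter names below are guesses based on the narration rather than the real package contents, and the SIM credentials and IP address are placeholders:

```bash
# Day-2 primitive on the srsLTE emulator: attach an emulated UE
# (instance, member, primitive, and parameter names are hypothetical).
osm ns-action magma-agw-enb --vnf_name srslte_ue \
  --action_name attach-ue \
  --params '{imsi: "001010000000001", k: "<sim-key>", opc: "<sim-opc>"}'

# Day-2 primitive on the VyOS PNF: allow traffic from the Magma SGi
# address that the emulated firewall was blocking.
osm ns-action magma-agw-enb --vnf_name vyos_pnf \
  --action_name configure-remote \
  --params '{magma_ip: "10.0.4.15"}'
```

Under the hood, each primitive maps to a script or Ansible playbook shipped inside the package itself, which is exactly why these actions are fully customizable.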
Let's launch, say, a second Evolved Packet Core, probably over another VIM, passing the parameters for this second one. We can say that this will be a second access gateway with a different name. This will launch the instances related to the EPC only; the EPC manager will stay the same, because it is a shared function.

Open Source MANO is capable of orchestrating the deployment and configuration of any set of network functions. It's just a matter of preparing the right descriptors with the right automation scripts to make this possible, and that is called the onboarding process. One of the most recent OSM features we want to highlight from the demo is the VNF catalogs feature, which automates the way packages are distributed. With OSM VNF catalogs, operators just need to run a command that connects to any given vendor's repository, which is nothing more than a web server with a predefined structure, for the package to go straight to the OSM catalog. So, in day-to-day operations, or when evaluating network functions, the operator can download network functions straight to the catalog, integrate them into a network service and/or network slice, and deploy them in the network. A process that regularly takes months now becomes a matter of minutes.

Another feature worth highlighting is Open Source MANO's SDN Assist. As you know, OpenStack by default attaches machines to OVS or Linux bridges, so inter-node communication happens automatically on top of an overlay network, usually implemented through VXLAN tunnels. However, when deploying NFV workloads, many data plane functions require high-throughput connectivity, which is usually provided by exposing the virtual machine's interface directly to the NIC with SR-IOV. Open Source MANO is capable of detecting these kinds of connections and, through an integration with an SDN controller or a configuration manager that can control the physical switches, it can provide end-to-end connectivity without any manual intervention. We invite you to test this out and contribute your ideas to enhance the usefulness of this example.

To sum up, Open Source MANO brings zero-touch automation on top of virtualization platforms like OpenStack and Kubernetes. This degree of automation, which NFV MANO platforms like OSM bring, is a must when dealing with modular, distributed, cloud-based network services. Most importantly, a platform like OSM with rich VNF onboarding capabilities is the key to simplifying network service deployments in the 5G and cloud era. Thank you.
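As a closing pointer for anyone who wants to try the SDN Assist behavior just described: it is driven by what the descriptors declare. Below is a minimal sketch of the relevant VNFD fragment, assuming the pre-SOL006 OSM information model; the ids and names are illustrative, and it is written as a shell heredoc to keep all examples in one format:

```bash
# Schematic VNFD fragment: declaring the virtual interface type as
# SR-IOV is the hint SDN Assist uses to program the physical fabric
# instead of relying on the default overlay path.
cat > agw-vnfd-fragment.yaml <<'EOF'
vdu:
  - id: magma-agw
    interface:
      - name: eth1
        type: INTERNAL
        virtual-interface:
          type: SR-IOV   # default would be VIRTIO over OVS/Linux bridge
EOF
```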